Demonstration Of Quantum Circuits Learning For Spoken Commands Recognition
Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen
SPS
With recent developments in quantum computation hardware, how to design a learning algorithm that exploits quantum advantages (e.g., feature compression and entanglement) while remaining compatible with noisy intermediate-scale quantum devices (5 to 50 qubits) is an open problem for the speech and signal processing community. In this demonstration, we provide an interactive walkthrough of the newly accepted quantum convolutional network architecture [1] for spoken command recognition (e.g., character-level) and classification (e.g., keywords). Furthermore, we showcase recent commercially and academically accessible cloud platforms, including quantum hardware (e.g., IBM Qiskit, Amazon Braket), as well as TPU-based (e.g., Google Cirq) and CPU-based (e.g., Xanadu) quantum hardware simulation [3], using our open-source codebase [2]. To the best of the authors' knowledge, this is the first work that applies quantum circuit learning (with reduced quantum error correction requirements) to build a hybrid system for speech and acoustic modeling. We expect the demonstration to provide a good overview of quantum machine learning and its software implementations to the general ICASSP community, and especially to the speech, acoustic, and quantum signal processing interest groups. We also provide a Colab implementation [2] for the audience to interact with.

1. Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition, IEEE ICASSP 2021, to appear.
2. Open-source codebase: https://github.com/huckiyang/QuantumSpeech-QCNN
3. Media coverage by third-party open-source software: https://twitter.com/pennylaneai/status/1369136622726508545?s=20
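To give a flavor of the quantum convolution ("quanvolution") idea referenced above, the following is a minimal NumPy statevector sketch of one 2-qubit quantum filter: input values are angle-encoded with RY rotations, entangled with a CNOT, and read out as Pauli-Z expectation values. This is an illustrative toy, not the authors' implementation in [2], which uses real quantum SDKs (e.g., PennyLane, Qiskit); all function names here are hypothetical.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

# CNOT gate on a 2-qubit register (control = qubit 0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Pauli-Z observables acting on qubit 0 and qubit 1
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
Z0, Z1 = np.kron(Z, I2), np.kron(I2, Z)

def quanvolution_patch(pixels):
    """Map two normalized input values to two <Z> expectation features."""
    state = np.zeros(4)
    state[0] = 1.0                                  # start in |00>
    u = np.kron(ry(np.pi * pixels[0]), ry(np.pi * pixels[1]))
    state = CNOT @ (u @ state)                      # encode, then entangle
    return np.array([state @ (Z0 @ state), state @ (Z1 @ state)])

# Slide the filter over a toy 1-D "spectrogram" row with stride 2
row = np.array([0.1, 0.9, 0.5, 0.3])
features = np.concatenate([quanvolution_patch(row[i:i + 2])
                           for i in range(0, len(row), 2)])
print(features.shape)  # (4,)
```

Each patch of classical features is thus compressed into a small set of quantum-measurement statistics, which can then feed a classical network downstream, mirroring the hybrid quantum-classical pipeline described in [1].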