Quantum transfer learning using the large-scale unsupervised pre-trained model WavLM-Large for synthetic speech detection

Ruoyu Wang (University of Science and Technology of China); Jun Du (University of Science and Technology of China); Tian Gao (iFlytek Research)

07 Jun 2023

The development of quantum machine learning has demonstrated quantum advantages over traditional deep learning, promising to discover new patterns in supervised classification datasets. This work proposes a classical-to-quantum transfer learning system built on a large-scale unsupervised pre-trained model to demonstrate the competitive performance of quantum transfer learning for synthetic speech detection. We use the pre-trained model WavLM-Large to extract feature maps from speech signals, obtain low-dimensional embedding vectors through classical network components, and then jointly fine-tune the pre-trained model and the classical components together with a variational quantum circuit (VQC). We evaluate the system on the ASVspoof 2021 DF task, and experiments using quantum circuit simulations show that quantum transfer learning improves on the classical transfer learning baseline for this task.
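The sketch below illustrates the kind of pipeline the abstract describes: WavLM-Large features pooled and projected to a low-dimensional embedding by classical layers, followed by a VQC classification head, all trainable end to end. It uses PyTorch, Hugging Face transformers, and PennyLane; the qubit count, circuit template (angle embedding plus strongly entangling layers), pooling strategy, and layer sizes are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
import pennylane as qml
from transformers import WavLMModel

N_QUBITS = 4   # assumed dimension of the embedding fed into the VQC
N_LAYERS = 2   # assumed number of variational layers

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Angle-encode the classical embedding, then apply trainable
    # entangling layers (a common VQC template; the paper's exact
    # circuit may differ).
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

class QuantumSpoofDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.wavlm = WavLMModel.from_pretrained("microsoft/wavlm-large")
        # Classical network components: pool WavLM frame features and
        # project them down to a low-dimensional embedding.
        self.proj = nn.Linear(self.wavlm.config.hidden_size, N_QUBITS)
        weight_shapes = {"weights": (N_LAYERS, N_QUBITS, 3)}
        self.vqc = qml.qnn.TorchLayer(circuit, weight_shapes)
        self.head = nn.Linear(N_QUBITS, 2)  # bona fide vs. spoofed

    def forward(self, waveform):
        feats = self.wavlm(waveform).last_hidden_state   # (B, T, H)
        emb = torch.tanh(self.proj(feats.mean(dim=1)))   # bounded angles
        return self.head(self.vqc(emb))                  # (B, 2) logits

model = QuantumSpoofDetector()
logits = model(torch.randn(1, 16000))  # one second of 16 kHz audio
```

Because the VQC is wrapped as a TorchLayer, gradients flow through the quantum circuit simulation into the classical projection and the pre-trained WavLM weights, which is what makes the joint fine-tuning described above possible.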
