  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:07
12 May 2022

The lack of large-scale note-level label data is the major obstacle to singing transcription from polyphonic music. We address this issue by using pseudo-labels from vocal pitch estimation models. The proposed method first converts frame-level pseudo-labels to note-level pseudo-labels through pitch and rhythm quantization steps. It then further improves label quality through self-training in a teacher-student framework. To validate the method, we conduct various experiments: we compare two vocal pitch estimation models to verify their suitability as pseudo-label generators, explore two teacher-student setups with different data augmentation settings, and investigate the number of self-training iterations. The results show that the proposed method can effectively leverage large-scale unlabeled audio data, and that self-training with the noisy student model helps improve performance. Finally, we show that the model trained with only unlabeled data performs reasonably compared to previous works, and that the model trained with additional labeled data achieves higher accuracy than the model trained with labeled data alone.
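The frame-to-note conversion described above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the frame hop, the minimum-duration threshold, and the simple segment-on-pitch-change rule standing in for rhythm quantization are all illustrative assumptions. Each frame's fundamental frequency is quantized to the nearest semitone (MIDI number), and runs of frames with the same quantized pitch become note-level pseudo-labels:

```python
import math

FRAME_SEC = 0.01      # assumed hop size of the pitch estimator (10 ms)
MIN_NOTE_SEC = 0.05   # assumed minimum duration for a pseudo-label note

def hz_to_midi(f):
    """Pitch quantization: map a frequency in Hz to the nearest MIDI note."""
    return round(69 + 12 * math.log2(f / 440.0))

def frames_to_notes(pitch_hz):
    """Convert a frame-level pitch track (0.0 = unvoiced) into
    note-level pseudo-labels as (onset_sec, offset_sec, midi) tuples."""
    notes = []
    cur_pitch, start = None, 0
    # A trailing unvoiced sentinel frame flushes the final open note.
    for i, f in enumerate(list(pitch_hz) + [0.0]):
        p = hz_to_midi(f) if f > 0 else None
        if p != cur_pitch:
            if cur_pitch is not None:
                dur = (i - start) * FRAME_SEC
                if dur >= MIN_NOTE_SEC:  # crude duration filter
                    notes.append((start * FRAME_SEC, i * FRAME_SEC, cur_pitch))
            cur_pitch, start = p, i
    return notes

# Example: 100 ms of A4, a short gap, then 100 ms of B4
notes = frames_to_notes([440.0] * 10 + [0.0] * 2 + [494.0] * 10)
```

In practice the paper additionally applies rhythm quantization and then refines these pseudo-labels via teacher-student self-training, which this sketch omits.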
