  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:47
03 Oct 2022

Deep neural networks (DNNs) can easily overfit noisy data, which leads to significant degradation of performance. Previous efforts primarily rely on label correction or sample selection to alleviate the supervision problem. To distinguish noisy labels from clean labels, we propose a meta-learning framework that gradually elicits credible labels via meta-gradient descent steps under the guidance of potentially non-noisy samples. Specifically, by exploiting the topological information of the feature space, we automatically estimate label confidence with a meta-learner. An iterative procedure selects the most trustworthy noisy-labeled instances, along with the clean data, to generate pseudo labels. We then train DNNs with both the pseudo supervision and the original noisy supervision, learning sufficiency and robustness properties under a joint objective. Experimental results on benchmark classification datasets show the superiority of our approach over state-of-the-art methods.
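The pipeline in the abstract can be sketched in simplified form: estimate a per-sample label confidence from feature-space neighborhoods, then combine pseudo supervision (weighted by confidence) with the original noisy supervision in one joint loss. This is a minimal NumPy illustration, not the paper's method: the k-nearest-neighbor label-agreement score stands in for the learned meta-estimator, and `alpha` is an assumed mixing hyperparameter.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def estimate_confidence(features, noisy_labels, k=3):
    """Hypothetical confidence score: the fraction of a sample's k nearest
    feature-space neighbours that share its (noisy) label. A stand-in for
    the meta-learned confidence estimator described in the abstract."""
    n = len(features)
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    conf = np.empty(n)
    for i in range(n):
        neighbours = np.argsort(dists[i])[1:k + 1]  # skip self (distance 0)
        conf[i] = np.mean(noisy_labels[neighbours] == noisy_labels[i])
    return conf

def joint_loss(logits, noisy_labels, pseudo_labels, conf, alpha=0.5):
    """Joint objective: confidence-weighted cross-entropy on pseudo labels
    plus cross-entropy on the original noisy labels, mixed by alpha
    (an assumed hyperparameter, not from the paper)."""
    p = softmax(logits)
    idx = np.arange(len(noisy_labels))
    ce_noisy = -np.log(p[idx, noisy_labels] + 1e-12)
    ce_pseudo = -np.log(p[idx, pseudo_labels] + 1e-12)
    return float(np.mean(alpha * conf * ce_pseudo + (1 - alpha) * ce_noisy))
```

In the full method the confidence estimator is itself trained by meta-gradient descent on a small trusted set, and the selection of trustworthy instances is repeated iteratively; the sketch above only shows the shape of one such iteration.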
