3D Centroidnet: Nuclei Centroid Detection With Vector Flow Voting

Liming Wu, Alain Chen, Paul Salama, Kenneth Dunn, Edward Delp

    Length: 00:11:15
04 Oct 2022

Knowledge distillation (KD) has been identified as an effective knowledge transfer approach. By learning from the outputs of a pre-trained, over-parameterized teacher network, a compact student network can be trained efficiently to achieve superior performance. Although KD has achieved substantial success, exposing pre-trained models poses a risk of intellectual property leakage. From a model-stealing attacker's perspective, the model's functionality can easily be mimicked via KD, resulting in significant financial losses. In this paper, we propose a novel adversarial training framework called the semantic nasty teacher, which prevents the teacher model from being copied by the attacker. Specifically, we disentangle the semantic relationship in the output logits when training the teacher model, which is the key to success in KD. Experimental results show that neural networks trained with our approach sacrifice only a little performance while largely preventing KD-based model stealing.
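As context for the distillation process described above, the following is a minimal sketch of the standard KD objective in which a student matches the teacher's softened output logits. The temperature T, weighting alpha, and function names are illustrative assumptions, not details taken from this paper.

```python
# Minimal sketch of a standard knowledge-distillation loss (assumed setup,
# not the paper's specific method). The student learns from the teacher's
# softened logits plus the usual supervised cross-entropy term.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # KL divergence between temperature-softened student and teacher outputs
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy on the ground-truth labels
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

A "nasty teacher" style defense, as the abstract describes, alters how the teacher's output logits are trained so that this kind of soft-target loss no longer transfers useful knowledge to a stolen student.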

