07 Oct 2022

Knowledge distillation is a process in which a complex teacher model guides the training of a smaller student model; the output of the teacher model's last hidden layer is commonly used as the transferred knowledge. This paper proposes a novel method for using this knowledge to guide the student model. The Tanimoto coefficient is used to measure both the length and the angle information of a sample pair. Knowledge distillation is conducted from two perspectives. The first is to compute a Tanimoto similarity matrix over every pair of training samples within a batch for the teacher model, and then use this matrix to guide the student model. The second is to compute a Tanimoto diversity between the teacher model and the student model for every training sample and minimize that diversity. On the FOOD101 and VOC2007 datasets, the top-1 accuracy and mAP obtained by our method are higher than those of existing distillation methods.
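To make the two perspectives concrete, below is a minimal PyTorch sketch (not the authors' code) of how such losses could be computed. The Tanimoto coefficient T(a, b) = ⟨a, b⟩ / (|a|² + |b|² − ⟨a, b⟩) depends on both the angle between two vectors and their lengths, which is why the abstract says it captures length and angle information. The function names, the use of MSE to match the two similarity matrices, and the assumption that teacher and student features share the same dimension are all illustrative choices, not details given in the abstract.

```python
import torch

def tanimoto(a, b):
    # Tanimoto coefficient T(a, b) = <a, b> / (|a|^2 + |b|^2 - <a, b>);
    # unlike cosine similarity, it is sensitive to both the angle
    # between the vectors and their lengths.
    dot = (a * b).sum(dim=-1)
    return dot / (a.pow(2).sum(dim=-1) + b.pow(2).sum(dim=-1) - dot)

def tanimoto_matrix(feats):
    # Pairwise Tanimoto similarity for every sample pair in a batch.
    # feats: (B, D) features, e.g. from a model's last hidden layer.
    gram = feats @ feats.t()                 # (B, B) inner products
    sq = feats.pow(2).sum(dim=1)             # (B,) squared norms
    return gram / (sq.unsqueeze(0) + sq.unsqueeze(1) - gram)

def distillation_loss(teacher_feats, student_feats, alpha=1.0, beta=1.0):
    # Perspective 1: match the student's pairwise Tanimoto similarity
    # matrix to the teacher's (the teacher matrix is treated as fixed).
    pair_loss = torch.nn.functional.mse_loss(
        tanimoto_matrix(student_feats),
        tanimoto_matrix(teacher_feats).detach(),
    )
    # Perspective 2: minimize the per-sample Tanimoto diversity,
    # i.e. 1 - T(teacher_i, student_i) for each sample i in the batch.
    sample_loss = (1.0 - tanimoto(teacher_feats.detach(), student_feats)).mean()
    return alpha * pair_loss + beta * sample_loss
```

In practice, if the student's feature dimension differs from the teacher's, a small learned projection on the student features would be needed before either loss term can be computed; the weighting of the two terms is likewise an assumption here.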
