Length: 00:06:32
11 Jun 2021

Visual food recognition has emerged in recent years as an important application for dietary monitoring and management. Existing works use large backbone networks to achieve good performance, but these networks cannot be deployed on personal portable devices because of their size and computation cost. Compact networks have been developed, but their performance is usually lower than that of the large backbones. In view of this, this paper proposes a joint distillation framework that aims to achieve high visual food recognition accuracy with a compact network. Unlike traditional one-directional knowledge distillation, the proposed framework trains the large teacher network and the compact student network simultaneously. It introduces a new Multi-Layer Distillation (MLD) scheme for simultaneous teacher-student learning at multiple layers of different abstraction, and a novel Instance Activation Mapping (IAM) that jointly trains the teacher and student networks using a generated instance-level activation map incorporating the label information of each training image. Experimental results on the two benchmark datasets UECFood-256 and Food-101 show that the trained compact student network achieves state-of-the-art accuracy of 83.5% and 90.4%, respectively, while reducing model size by more than 4 times.
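To make the joint, multi-layer idea concrete, below is a minimal PyTorch sketch of one training step in which a teacher and a compact student are optimized together, with feature-matching losses at several intermediate layers. The backbone, layer choices, projection heads, and loss weights are illustrative assumptions and not the paper's exact MLD or IAM design; the abstract does not specify them.

```python
# Hedged sketch of joint multi-layer distillation: both networks are updated
# in the same optimization step (joint, not one-directional). All names and
# hyperparameters here are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in backbone that exposes intermediate features and logits."""
    def __init__(self, width, num_classes=256):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, width, 3, 2, 1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(width, width * 2, 3, 2, 1), nn.ReLU())
        self.head = nn.Linear(width * 2, num_classes)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        logits = self.head(f2.mean(dim=(2, 3)))  # global average pooling
        return [f1, f2], logits

def mld_loss(student_feats, teacher_feats, projections):
    """Match student to teacher features at every chosen layer (MLD-style)."""
    loss = 0.0
    for proj, fs, ft in zip(projections, student_feats, teacher_feats):
        loss = loss + F.mse_loss(proj(fs), ft)
    return loss

teacher = TinyNet(width=64)   # "large" network
student = TinyNet(width=16)   # compact network
# 1x1 convs lift student channels to teacher channels at each matched layer.
projections = nn.ModuleList([nn.Conv2d(16, 64, 1), nn.Conv2d(32, 128, 1)])

params = (list(teacher.parameters()) + list(student.parameters())
          + list(projections.parameters()))
opt = torch.optim.SGD(params, lr=0.01)

images = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 256, (4,))

t_feats, t_logits = teacher(images)
s_feats, s_logits = student(images)
loss = (F.cross_entropy(t_logits, labels)     # teacher trained simultaneously
        + F.cross_entropy(s_logits, labels)   # student supervised loss
        + mld_loss(s_feats, t_feats, projections)
        + F.kl_div(F.log_softmax(s_logits, 1),
                   F.softmax(t_logits, 1).detach(),  # soften teacher targets
                   reduction="batchmean"))
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point the sketch illustrates is that the teacher's own classification loss is part of the joint objective, so the teacher keeps learning while the student distills from it, rather than being frozen as in one-directional distillation.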

Chairs:
Yonghee Han
