ENTROPY BASED FEATURE REGULARIZATION TO IMPROVE TRANSFERABILITY OF DEEP LEARNING MODELS
Raphaël Baena (IMT Atlantique); Lucas Drumetz (IMT Atlantique); Vincent Gripon (IMT Atlantique)
When dealing with signals, labeling a classification dataset involves defining classes that may only approximate a smoother and more complex ground truth. For example, natural images may contain multiple objects while only one of them is labeled in many vision datasets, or classes may result from discretizing a regression problem whose targets are continuous. Training deep models with cross-entropy on such coarse labels tends to cut roughly through the feature space, potentially discarding its most meaningful features and, in particular, losing information about the underlying fine-grained task. In this paper, we are interested in the problem of solving fine-grained classification or regression using a model trained on coarse-grained labels only. We show that standard cross-entropy can lead to overfitting to coarse-related features, and we introduce an entropy-based regularization to promote more diversity in the feature space of trained models.
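To make the idea concrete, here is a minimal PyTorch sketch of a loss that combines cross-entropy on coarse labels with an entropy term computed on the penultimate feature activations, rewarding more diverse (higher-entropy) features. The specific feature-entropy definition, the weighting parameter lam, and the model interface are illustrative assumptions, not the exact formulation of the paper.

```python
# Illustrative sketch only: cross-entropy on coarse labels plus an entropy bonus
# on the feature activations, so features do not collapse onto coarse-label
# directions. The entropy definition below is an assumption for illustration.
import torch
import torch.nn.functional as F


def feature_entropy(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Shannon entropy of softmax-normalized feature activations, averaged over the batch."""
    p = F.softmax(features, dim=1)                       # turn each feature vector into a distribution
    return -(p * torch.log(p + eps)).sum(dim=1).mean()   # H(p), averaged over the batch


def regularized_loss(logits: torch.Tensor,
                     features: torch.Tensor,
                     coarse_targets: torch.Tensor,
                     lam: float = 0.1) -> torch.Tensor:
    """Cross-entropy on coarse labels minus a weighted feature-entropy bonus."""
    ce = F.cross_entropy(logits, coarse_targets)
    return ce - lam * feature_entropy(features)


# Hypothetical usage, assuming the model returns both logits and features:
#   logits, features = model(x)
#   loss = regularized_loss(logits, features, coarse_labels)
#   loss.backward()
```

Subtracting the entropy term (rather than adding it) means minimizing the loss pushes the feature distribution toward higher entropy, i.e., toward spreading information across more feature dimensions instead of only those needed to separate the coarse classes.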