Poster 11 Oct 2023

Multi-instance multi-label active learning (MIMAL) typically uses example uncertainty and label correlation to select the most valuable example-label pairs and thereby maximize the learner's performance. However, existing MIMAL solutions do not consider the correlation between examples and their features when selecting example-label pairs. This paper proposes a novel MIMAL framework that effectively exploits the relationship between examples and features to reduce annotation cost. We first perform feature screening on the examples, which eliminates the interference of useless features on the annotation process. Next, we quantify the correlation between features and examples and use it as the basis for selecting example-label pairs. Finally, for each selected example-label pair, we query the sub-example most likely to be positive. Extensive experiments on multi-label datasets from diverse domains show that the proposed framework saves more query cost and achieves better performance than state-of-the-art MIMAL methods.
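The abstract does not specify concrete formulas, so the following Python sketch is only an illustration of the selection loop described above, under stated assumptions: variance thresholding stands in for the paper's feature-screening step, cosine similarity to the bag prototype stands in for its feature-example correlation measure, and uncertainty is taken as one minus the magnitude of the current model score. All function and variable names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def screen_features(bags, var_threshold=1e-3):
    """Drop low-variance features across all instances -- a simple stand-in
    for the feature-screening step described in the abstract."""
    all_instances = np.vstack(bags)                    # (total_instances, n_features)
    keep = all_instances.var(axis=0) > var_threshold   # mask of informative features
    return [bag[:, keep] for bag in bags], keep

def feature_example_correlation(bag):
    """Assumed correlation measure: mean absolute cosine similarity of each
    instance (sub-example) to the bag's mean prototype."""
    proto = bag.mean(axis=0)
    proto = proto / (np.linalg.norm(proto) + 1e-12)
    inst = bag / (np.linalg.norm(bag, axis=1, keepdims=True) + 1e-12)
    return float(np.abs(inst @ proto).mean())

def select_pair(bags, label_scores, queried):
    """Rank (example, label) pairs by uncertainty weighted by the
    feature-example correlation and return the best unqueried pair."""
    corr = np.array([feature_example_correlation(b) for b in bags])  # (n_bags,)
    uncertainty = 1.0 - np.abs(label_scores)                         # scores assumed in [-1, 1]
    value = uncertainty * corr[:, None]
    value[queried] = -np.inf                                         # never re-query a pair
    i, l = np.unravel_index(np.argmax(value), value.shape)
    return int(i), int(l)

def most_positive_subexample(bag, label_weight):
    """Within the chosen bag, pick the sub-example with the highest
    projection onto the chosen label's weight vector."""
    return int(np.argmax(bag @ label_weight))

# Toy usage with synthetic bags, 5 labels, and random model scores.
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 8), 20)) for _ in range(30)]
bags, kept = screen_features(bags)
label_scores = rng.uniform(-1, 1, size=(30, 5))        # current learner outputs
queried = np.zeros((30, 5), dtype=bool)
i, l = select_pair(bags, label_scores, queried)
j = most_positive_subexample(bags[i], rng.normal(size=bags[i].shape[1]))
print(f"query bag {i}, label {l}, sub-example {j}")
```

In a real active-learning loop, the selected sub-example-label pair would be sent to an annotator, the learner retrained, and `queried[i, l]` marked before the next selection round.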