Lecture 11 Oct 2023

The limitations of a machine learning model can often be traced back to under-represented regions in the feature space of its training data. Data augmentation is a common technique used to inflate training datasets with new samples and improve model performance. However, these techniques usually focus on expanding the dataset in size and do not necessarily aim to cover the under-represented regions of the feature space. In this paper, we propose an Attention-guided Data Augmentation technique for Vision Transformers (ADA-ViT). Our framework exploits the attention mechanism in vision transformers to extract visual concepts associated with misclassified samples. The retrieved concepts describe under-represented regions of the training dataset that contributed to the misclassifications. We leverage this information to guide the data augmentation process: we identify new samples that match these concepts and use them to augment the training data. We hypothesize that this focused augmentation populates the under-represented regions and improves the model's accuracy. We evaluate our framework on the CUB dataset and CUB-Families. Our experiments show that ADA-ViT outperforms state-of-the-art data augmentation strategies and improves transformer accuracy by an average margin of 2.5% on CUB and 3.3% on CUB-Families.
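
The abstract describes the pipeline at a high level: read out where the transformer attends on misclassified images, summarize that attention as a concept, and retrieve matching samples to add to the training set. The paper's exact procedure is not given here, so the sketch below is a minimal illustration under assumed interfaces: attention rollout (Abnar and Zuidema, 2020) stands in for the paper's concept-extraction step, and cosine-similarity retrieval over a hypothetical unlabeled pool stands in for its sample identification. All function names, shapes, and hyperparameters (`top_k`, `n_new`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def attention_rollout(attn_maps):
    """Fuse per-layer attention maps into one map (attention rollout).

    attn_maps: list of (heads, tokens, tokens) tensors, one per layer,
    where token 0 is the CLS token. Returns the CLS token's fused
    attention over the patch tokens, shape (tokens - 1,).
    """
    result = torch.eye(attn_maps[0].size(-1))
    for attn in attn_maps:
        a = attn.mean(0)                      # average over heads
        a = a + torch.eye(a.size(-1))         # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)   # re-normalize rows
        result = a @ result
    return result[0, 1:]

def salient_patch_embedding(patch_embeddings, rollout, top_k=8):
    """Summarize a misclassified image by its most-attended patch features.

    A stand-in for the paper's visual-concept extraction: the mean feature
    of the top_k patches the model attended to most.
    """
    idx = rollout.topk(top_k).indices
    return patch_embeddings[idx].mean(0)      # one "concept" vector

def retrieve_augmentations(concept, pool_embeddings, n_new=16):
    """Pick external-pool images whose features best match the concept.

    These retrieved samples would then be added to the training set to
    populate the under-represented region.
    """
    sims = F.cosine_similarity(concept[None, :], pool_embeddings, dim=-1)
    return sims.topk(n_new).indices           # indices into the external pool

# Toy run with random tensors, just to show the shapes fit together:
# 12 layers, 12 heads, 197 tokens (CLS + 14x14 patches), 768-dim features.
attn_maps = [torch.rand(12, 197, 197).softmax(-1) for _ in range(12)]
rollout = attention_rollout(attn_maps)                            # (196,)
concept = salient_patch_embedding(torch.randn(196, 768), rollout) # (768,)
picked = retrieve_augmentations(concept, torch.randn(5000, 768))  # (16,)
```

In a real setting the attention maps and patch embeddings would come from forward hooks on a trained ViT, and the pool embeddings from encoding an external image collection with the same backbone; the retrieval step here is a simple nearest-neighbor choice and may differ from the paper's method.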
