Vision Transformer-based Feature Extraction for Generalized Zero-Shot Learning
Jiseob Kim (Seoul National University); Kyuhong Shim (Seoul National University); Junhan Kim (Seoul National University); Byonghyo Shim (Seoul National University)
Generalized zero-shot learning (GZSL) is a technique for training a deep learning model to identify unseen classes using image attributes. In this paper, we put forth a new GZSL technique that exploits the Vision Transformer (ViT) to maximize the attribute-related information contained in the image feature. ViT processes the entire image region without degrading the image resolution, and the local image information is preserved in its patch features. To fully exploit these benefits of ViT, we use the patch features as well as the CLS feature when extracting the attribute-related image feature. In particular, we propose a novel attention-based module, called the attribute attention module (AAM), to aggregate the attribute-related information in the patch features. Through extensive experiments on benchmark datasets, we demonstrate that the proposed technique outperforms state-of-the-art GZSL approaches by a large margin.
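To make the described feature extraction concrete, below is a minimal PyTorch sketch of an attribute-attention-style aggregation over ViT patch features. The module name, dimensions, and the exact attention formulation are illustrative assumptions and may differ from the AAM design in the paper; the sketch only shows the general idea of letting learnable attribute queries attend over patch features and fusing the result with the CLS feature.

# Minimal sketch (assumptions: module name, dimensions, and attention form
# are hypothetical; the paper's AAM may be defined differently).
import torch
import torch.nn as nn


class AttributeAttentionModule(nn.Module):
    """Aggregates attribute-related information from ViT patch features.

    Each attribute has a learnable query that attends over the patch
    features; the attended summary is fused with the CLS feature to form
    the final attribute-related image feature.
    """

    def __init__(self, embed_dim: int = 768, num_attributes: int = 85):
        super().__init__()
        # One learnable query vector per attribute (assumed design choice).
        self.attr_queries = nn.Parameter(torch.randn(num_attributes, embed_dim))
        self.scale = embed_dim ** -0.5
        self.proj = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, cls_feat: torch.Tensor, patch_feats: torch.Tensor) -> torch.Tensor:
        # cls_feat:    (B, D)    CLS token feature from the ViT
        # patch_feats: (B, N, D) patch token features from the ViT
        # Scaled dot-product attention between attribute queries and patches.
        attn = torch.einsum("ad,bnd->ban", self.attr_queries, patch_feats) * self.scale
        attn = attn.softmax(dim=-1)                                   # (B, A, N)
        attr_feats = torch.einsum("ban,bnd->bad", attn, patch_feats)  # (B, A, D)
        patch_summary = attr_feats.mean(dim=1)                        # (B, D)
        # Fuse CLS and patch-derived information into one image feature.
        return self.proj(torch.cat([cls_feat, patch_summary], dim=-1))


if __name__ == "__main__":
    B, N, D, A = 2, 196, 768, 85
    aam = AttributeAttentionModule(embed_dim=D, num_attributes=A)
    out = aam(torch.randn(B, D), torch.randn(B, N, D))
    print(out.shape)  # torch.Size([2, 768])

In this sketch, the number of attributes and the embedding size follow common GZSL and ViT-Base conventions; both are placeholders rather than values taken from the paper.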