Multi-modal Domain Generalization for Cross-Scene Hyperspectral Image Classification
Yuxiang Zhang (Beijing Institute of Technology); Mengmeng Zhang (Beijing Institute of Technology); Wei Li (Beijing Institute of Technology, Beijing, China); Ran Tao (Beijing Institute of Technology)
SPS
Large-scale pre-trained image-text foundation models have excelled in numerous downstream applications. However, most domain generalization techniques have not exploited linguistic modal knowledge to improve generalization performance, and text information has been ignored in hyperspectral image (HSI) classification tasks. To address these shortcomings, a Multi-modal Domain Generalization Network (MDG) is proposed to learn cross-domain invariant representations from a cross-domain shared semantic space. Visual and linguistic features are extracted by a dual-stream architecture consisting of an image encoder and a text encoder. A generator is designed to produce extended-domain (ED) samples that differ from the source domain (SD). Furthermore, the linguistic features are used to construct the cross-domain shared semantic space, in which visual-linguistic alignment is accomplished by supervised contrastive learning.
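The supervised contrastive visual-linguistic alignment described above can be sketched as a label-aware contrastive loss between the two encoders' outputs. This is a minimal illustrative sketch, not the authors' implementation: the function name, feature dimensions, and temperature value are assumptions, and the class label is used to define which image-text pairs count as positives.

```python
import numpy as np

def supervised_contrastive_alignment(visual, text, labels, temperature=0.07):
    """Hypothetical sketch of visual-linguistic alignment: pull each visual
    feature toward text features of the same class and push it away from
    text features of other classes (a supervised contrastive loss)."""
    # L2-normalize both modalities so dot products become cosine similarities
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / temperature                 # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # positive mask: visual sample i and text sample j share a class label
    pos = (labels[:, None] == labels[None, :]).astype(float)
    # row-wise log-softmax over all text features
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # average negative log-probability over the positives of each sample
    loss = -(pos * log_prob).sum(axis=1) / pos.sum(axis=1)
    return loss.mean()

# toy example: 4 samples, 2 classes, 8-dim features (illustrative only)
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
visual = rng.normal(size=(4, 8))
text = rng.normal(size=(4, 8))
print(supervised_contrastive_alignment(visual, text, labels))
```

When the two modalities are perfectly aligned and class-separated, the loss approaches log(k), where k is the number of positives per sample, which is its minimum for this positive structure.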