FGCVQA: FINE-GRAINED CROSS-ATTENTION FOR MEDICAL VQA
Ziheng Wu, Xinyao Shu, Shiyang Yan, Zhenyu Lu
The application of Visual Question Answering (VQA) to the medical field has significantly changed traditional medical research methods. A mature medical VQA system can greatly assist patient diagnosis. Generic-domain VQA models are not compelling enough at aligning medical image features with textual semantics, owing to the complex diversity of clinical questions and the difficulty of multi-modal reasoning. To address these issues, we propose a model called FGCVQA. It is essential to consider the semantic alignment between medical images and language features. Specifically, we use a Cross-Modality Encoder to learn joint semantic representations of medical images and texts; by attending to fine-grained properties, it improves multi-modal reasoning ability. Experimental results show that FGCVQA outperforms all previous methods on the VQA-RAD radiology image dataset. FGCVQA answers medical visual questions effectively and can help doctors make better clinical analyses and diagnoses. The source code is available at https://github.com/wwzziheng/FGCVQA.
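To make the cross-modality encoding concrete, the sketch below shows one plausible form of fine-grained cross-attention between question tokens and image regions, where each modality queries the other at the token/region level rather than through a single global vector. This is a minimal illustration under assumed names and dimensions (CrossModalityLayer, dim=768, heads=12), not the authors' actual implementation.

```python
# Minimal sketch of a fine-grained cross-modality layer (assumed design,
# not the released FGCVQA code). Text tokens attend over image regions and
# image regions attend over text tokens, so alignment is token-level.
import torch
import torch.nn as nn

class CrossModalityLayer(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, num_tokens,  dim) question token embeddings
        # image_feats: (batch, num_regions, dim) image region features
        t, _ = self.txt2img(text_feats, image_feats, image_feats)
        v, _ = self.img2txt(image_feats, text_feats, text_feats)
        # Residual connections plus LayerNorm, as in standard Transformers.
        return self.norm_t(text_feats + t), self.norm_v(image_feats + v)

# Usage: the fused text features would typically feed an answer classifier.
layer = CrossModalityLayer()
txt = torch.randn(2, 20, 768)
img = torch.randn(2, 36, 768)
txt_out, img_out = layer(txt, img)
print(txt_out.shape, img_out.shape)  # (2, 20, 768) and (2, 36, 768)
```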