12 May 2022

In order to achieve a general visual question answering (VQA) system, it is essential to learn to answer deeper questions that require compositional reasoning over the image and external knowledge. At the same time, the reasoning process should be explicit and explainable so that the working mechanism of the model can be understood. This is effortless for humans but challenging for machines. In this paper, we propose a Hierarchical Graph Neural Module Network (HGNMN) that reasons over multi-layer graphs with neural modules to address the above issues. Specifically, we first encode the image as multi-layer graphs from visual, semantic and commonsense views, since the clues that support the answer may exist in different modalities. Our model consists of several predefined neural modules that perform specific functions over graphs and can be composed to conduct multi-step reasoning within and between different graphs. Compared to existing modular networks, we extend visual reasoning from a single graph to multiple graphs, and the reasoning process can be explicitly traced through module weights and graph attentions. Experiments show that our model not only achieves state-of-the-art performance on the CRIC dataset, but also produces explicit and explainable reasoning procedures.
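The abstract does not specify the model's actual module set, parameters, or equations, so the sketch below is only an illustrative guess at the general mechanism it describes: soft attention over the nodes of several graphs, within-graph reasoning steps, and steps that move attention between graphs. The function names (find, transfer, cross_graph), the alignment matrix, and the toy graphs are all hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch only; names and shapes are assumptions, not the paper's API.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def find(node_feats, query):
    """Attend to nodes whose features match a question-derived query vector."""
    return softmax(node_feats @ query)

def transfer(attention, adjacency):
    """Propagate attention one hop along the graph's edges (within-graph step)."""
    spread = adjacency.T @ attention
    return spread / (spread.sum() + 1e-8)

def cross_graph(attention, alignment):
    """Move attention from one graph to another via a soft node-alignment matrix
    (e.g. visual regions aligned to semantic or commonsense concepts)."""
    moved = alignment.T @ attention
    return moved / (moved.sum() + 1e-8)

# Toy multi-layer graph: 4 visual regions, 3 commonsense concepts.
rng = np.random.default_rng(0)
visual_feats = rng.normal(size=(4, 8))
visual_adj = np.array([[0, 1, 1, 0],
                       [1, 0, 0, 1],
                       [1, 0, 0, 1],
                       [0, 1, 1, 0]], dtype=float)
align_vis_to_kb = rng.random((4, 3))   # hypothetical region-to-concept links
query = rng.normal(size=8)             # hypothetical question embedding

# One interpretable reasoning trace: find -> transfer -> cross_graph.
att_v = find(visual_feats, query)            # attention over visual nodes
att_v = transfer(att_v, visual_adj)          # hop to related regions
att_kb = cross_graph(att_v, align_vis_to_kb) # jump to the commonsense graph

# The per-step graph attentions (and, in the full model, soft module weights)
# are what would make such a reasoning procedure explicit and traceable.
print("visual attention:", np.round(att_v, 3))
print("commonsense attention:", np.round(att_kb, 3))
```

In this reading, explainability comes for free: each reasoning step leaves behind an attention distribution over one graph's nodes, so the chain of attended regions and concepts can be inspected directly.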
