11 Jun 2021

Recently, attention-enhanced multi-layer encoders, such as the Transformer, have been extensively studied in Machine Reading Comprehension (MRC). To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which produces coarse-grained representations of the source sequences, i.e., the passage and the question. Previous studies have shown that the representation of the source sequence shifts from fine-grained to coarse-grained as the number of encoding layers increases. It is generally believed that, as the number of layers in a deep neural network grows, the encoding process aggregates more and more relevant information at each position, yielding increasingly coarse-grained representations that become more similar to those of other positions (referred to as homogeneity). This phenomenon can mislead the model into making wrong judgments and thus degrade performance. To this end, we propose a novel approach called Adaptive Bidirectional Attention, which adaptively exposes source representations from different encoder levels to the predictor. Experimental results on the benchmark dataset SQuAD 2.0 demonstrate the effectiveness of our approach, which outperforms the previous state-of-the-art model by 2.5% EM and 2.3% F1.
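
Below is a minimal, hypothetical PyTorch sketch of the core idea: letting the predictor draw on adaptively weighted representations from every encoder layer instead of only the last one. It is not the paper's exact Adaptive Bidirectional Attention; the module and parameter names (AdaptiveLayerFusion, hidden_dim) are illustrative assumptions.

import torch
import torch.nn as nn

class AdaptiveLayerFusion(nn.Module):
    """Learn token-wise weights over encoder layers and mix their outputs."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        # One shared scoring head; a softmax over layers gives the mixing weights.
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, layer_outputs):
        # layer_outputs: list of [batch, seq_len, hidden_dim] tensors, one per encoder layer
        stacked = torch.stack(layer_outputs, dim=2)              # [B, T, L, H]
        scores = self.scorer(stacked).squeeze(-1)                # [B, T, L]
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)    # [B, T, L, 1]
        # Weighted sum over layers -> fused representation fed to the predictor
        return (weights * stacked).sum(dim=2)                    # [B, T, H]

# Usage with dummy outputs from a 6-layer encoder
layer_outputs = [torch.randn(2, 16, 64) for _ in range(6)]
fusion = AdaptiveLayerFusion(hidden_dim=64)
fused = fusion(layer_outputs)
print(fused.shape)  # torch.Size([2, 16, 64])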

Chairs:
Kai Yu
