
ATTENTION LOCALNESS IN SHARED ENCODER-DECODER MODEL FOR TEXT SUMMARIZATION

Li Huang (Southwestern University of Finance and Economics); Hongmei Wu (Southwestern University of Finance and Economics); Qiang Gao (Southwestern University of Finance and Economics); Guisong Liu (Southwestern University of Finance and Economics)

06 Jun 2023

Text summarization aims to generate a brief version of a given article while preserving its essential meaning. Most existing solutions rely on the standard attention-based encoder-decoder framework, in which every token in the source article, including redundant content, contributes to the decoder through the attention mechanism. Filtering out this redundancy therefore becomes an important issue in the text summarization task. In this study, we propose a localness attention network, designed with simplicity and feasibility in mind, which circles different local regions of the source article as contributors at different decoding steps. To further strengthen the localness modeling, we share the semantic space of the encoder and decoder. Experimental results on two benchmark datasets demonstrate the effectiveness and applicability of the proposed method in comparison with several well-practiced approaches.
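As a rough illustration of the idea, the sketch below shows one plausible form of localness-biased cross-attention in PyTorch: the decoder state predicts a center and width for a Gaussian bias over source positions, so each decoding step draws mainly on a different local region of the article. The class name, the Gaussian parameterization, and all layer names are illustrative assumptions on my part, not the authors' exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalnessAttention(nn.Module):
    """Hypothetical sketch of localness-biased cross-attention.

    At each decoding step, the decoder state predicts the center (mu) and
    width (sigma) of a Gaussian bias over source positions, so tokens
    outside that local region contribute little to the context vector.
    This Gaussian form is an assumption for illustration only.
    """

    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Predict the local region's center and width from the decoder state.
        self.center = nn.Linear(d_model, 1)
        self.width = nn.Linear(d_model, 1)
        self.scale = math.sqrt(d_model)

    def forward(self, dec_state, enc_states):
        # dec_state:  (batch, d_model)          current decoder hidden state
        # enc_states: (batch, src_len, d_model) encoder outputs
        batch, src_len, _ = enc_states.shape

        q = self.q_proj(dec_state).unsqueeze(1)   # (batch, 1, d_model)
        k = self.k_proj(enc_states)               # (batch, src_len, d_model)
        v = self.v_proj(enc_states)

        # Standard scaled dot-product scores over the source tokens.
        scores = torch.bmm(q, k.transpose(1, 2)) / self.scale  # (batch, 1, src_len)

        # Localness bias: a Gaussian centered at a predicted source position,
        # penalizing attention to tokens far from the local region.
        mu = torch.sigmoid(self.center(dec_state)) * (src_len - 1)  # (batch, 1)
        sigma = F.softplus(self.width(dec_state)) + 1.0             # (batch, 1)
        pos = torch.arange(src_len, device=enc_states.device).float()
        bias = -((pos.unsqueeze(0) - mu) ** 2) / (2 * sigma ** 2)   # (batch, src_len)

        attn = F.softmax(scores + bias.unsqueeze(1), dim=-1)  # (batch, 1, src_len)
        context = torch.bmm(attn, v).squeeze(1)               # (batch, d_model)
        return context, attn.squeeze(1)
```

Sharing the semantic space of the encoder and decoder, as the abstract describes, could for instance be realized by tying their token-embedding matrices (and reusing the same projection layers on both sides), though the paper's exact sharing scheme may differ from this reading.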
