
Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations

Zhenxiao Cheng (East China Normal University); Jie Zhou (Fudan University); Wen Wu (East China Normal University); Qin Chen (East China Normal University); Liang He (East China Normal University)

07 Jun 2023

Gradient-based explanation methods play an important role in interpreting complex neural networks for NLP. However, existing work has shown that a model's gradients are unstable and easily manipulable. In our preliminary analyses, we also find that the interpretability of gradient-based methods is limited for complex tasks such as aspect-based sentiment classification (ABSC). In this paper, we propose an Interpretation-Enhanced Gradient-based framework for ABSC via small explanation annotations, namely IEGA. Specifically, we first compute a word-level saliency map from gradients to measure the importance of each word in the sentence toward the given aspect. We then design a gradient correction module to steer the model's attention toward the correct parts of the sentence (e.g., opinion words). Comprehensive experiments on four benchmark datasets show that IEGA improves not only the interpretability of the model but also its performance and robustness.
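Below is a minimal PyTorch sketch of the two steps the abstract describes: computing a gradient-based word saliency map and adding a correction term that pulls that saliency toward a small human explanation mask. This is not the authors' released code; the toy mean-pooling classifier, the cross-entropy form of the correction loss, and the 0.1 weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyClassifier(nn.Module):
    """Stand-in for an ABSC encoder: mean-pools word embeddings."""
    def __init__(self, dim=32, num_classes=3):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, emb):                  # emb: (batch, seq_len, dim)
        return self.fc(emb.mean(dim=1))      # (batch, num_classes)

def word_saliency(model, emb, label):
    """Gradient-x-input saliency per word (L2 norm over the embedding dim)."""
    emb = emb.detach().requires_grad_(True)
    loss = F.cross_entropy(model(emb), label)
    # create_graph=True keeps the saliency differentiable, so the
    # correction loss below can backpropagate into the model.
    (grad,) = torch.autograd.grad(loss, emb, create_graph=True)
    return (grad * emb).norm(dim=-1)         # (batch, seq_len)

def correction_loss(saliency, mask):
    """Cross-entropy between the saliency distribution and a human
    explanation mask (1.0 at opinion-word positions, 0.0 elsewhere)."""
    target = mask / mask.sum(dim=-1, keepdim=True)
    log_p = F.log_softmax(saliency, dim=-1)
    return -(target * log_p).sum(dim=-1).mean()

# Joint objective: task loss plus the interpretation-enhancement term.
model = ToyClassifier()
emb = torch.randn(1, 6, 32)                  # one 6-word sentence
label = torch.tensor([2])
mask = torch.tensor([[0., 0., 1., 1., 0., 0.]])  # words 3-4 are opinion words

sal = word_saliency(model, emb, label)
total = F.cross_entropy(model(emb), label) + 0.1 * correction_loss(sal, mask)
total.backward()                             # updates flow through the saliency
```

Because the saliency is built with `create_graph=True`, the correction term reshapes the encoder itself rather than just post-hoc explanations, which is the idea behind using only a small number of explanation annotations.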
