COMPENSATORY DEBIASING FOR GENDER IMBALANCES IN LANGUAGE MODELS

Tae-Jin Woo (Korea University); Woo-Jeoung Nam (Kyungpook National University); Yeong-Joon Ju (Korea University); Seong-Whan Lee (Korea University)

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
07 Jun 2023

Pre-trained language models (PLMs) learn gender bias from imbalances in human-written corpora. This bias leads to critical social issues when PLMs are deployed in real-world scenarios. However, bias mitigation is constrained by a trade-off: reducing bias tends to degrade language modeling performance. It is particularly challenging to detach and remove biased representations in the embedding space because the learned linguistic knowledge itself entails bias. To address this problem, we propose a compensatory debiasing strategy that reduces gender bias while preserving linguistic knowledge. The strategy uses two types of sentences to distinguish biased knowledge: stereotype and non-stereotype sentences. For stereotype sentences, we enforce small angles and distances between pairs of representations of the two gender groups to mitigate bias. At the same time, for non-stereotype sentences, we maximize the agreement between the representations of the debiasing model and the original model to maintain linguistic knowledge. To validate our approach, we measure the performance of the debiased model using the following evaluation metrics: SEAT, StereoSet, CrowS-Pairs, and GLUE. Our experimental results demonstrate that the model fine-tuned with our strategy achieves the lowest level of bias while retaining the knowledge of the PLM.
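The two objectives described above can be sketched as a combined loss. This is a minimal illustrative sketch, not the authors' implementation: the exact angle/distance terms, the agreement measure, and the weighting `lam` are assumptions, and sentence representations are stood in for by plain NumPy vectors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def debias_loss(h_male, h_female):
    """Stereotype sentences: push gender-swapped pair representations
    toward a small angle (1 - cosine) and a small Euclidean distance.
    (Illustrative choice of terms; the paper's exact form may differ.)"""
    angle_term = 1.0 - cosine(h_male, h_female)
    dist_term = float(np.linalg.norm(h_male - h_female))
    return angle_term + dist_term

def knowledge_loss(h_debiased, h_original):
    """Non-stereotype sentences: maximize agreement with the frozen
    original model, here measured as 1 - cosine similarity (assumed)."""
    return 1.0 - cosine(h_debiased, h_original)

def total_loss(h_male, h_female, h_debiased, h_original, lam=1.0):
    """Compensatory objective: debias stereotype pairs while keeping
    non-stereotype representations close to the original model.
    `lam` is a hypothetical balancing weight."""
    return debias_loss(h_male, h_female) + lam * knowledge_loss(h_debiased, h_original)
```

Both terms are zero when the paired representations coincide, so the objective only penalizes gender-dependent divergence on stereotype sentences and drift from the original model on non-stereotype sentences.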
