
Adversarially Robust Fairness-aware Regression

Yulu Jin (University of California, Davis); Lifeng Lai (University of California, Davis)

04 Jun 2023

Fairness and robustness are critical elements of trustworthy machine learning systems. In this paper, we use a minimax framework to design an adversarially robust fair regression model that achieves optimal performance in the presence of an attacker who can perform a rank-one attack on the dataset. By solving the resulting nonsmooth nonconvex-nonconcave minimax problem, we obtain both the optimal adversary and the robust fairness-aware regression model. Numerical results on two real-world datasets show that, on the poisoned dataset, the proposed adversarially robust fair model outperforms other fair machine learning models in both prediction accuracy and a group-based fairness measure.
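To make the threat model concrete, the following is a minimal NumPy sketch of a rank-one data-poisoning attack on a least-squares regression problem. It is illustrative only, not the authors' algorithm: the attack budget `eps`, the random data, and the choice of perturbation direction are all assumptions made for the example.

```python
import numpy as np

# Synthetic regression data (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Rank-one attack: the adversary adds a perturbation Delta = u v^T
# to the feature matrix, with the Frobenius norm of Delta bounded
# by an assumed budget eps (here the directions are chosen randomly;
# an optimal adversary would choose them to maximize damage).
eps = 0.5
u = rng.normal(size=n)
u *= eps / np.linalg.norm(u)   # ||u|| = eps
v = rng.normal(size=d)
v /= np.linalg.norm(v)         # ||v|| = 1, so ||u v^T||_F = eps
X_poisoned = X + np.outer(u, v)

# Ordinary least squares fit on clean vs. poisoned data.
w_clean = np.linalg.lstsq(X, y, rcond=None)[0]
w_pois = np.linalg.lstsq(X_poisoned, y, rcond=None)[0]
```

In the paper's minimax setting, the learner would instead minimize a fairness-regularized loss over the worst case of all such rank-one perturbations, rather than fitting to a single fixed poisoned matrix as above.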
