Adversarially Robust Fairness-aware Regression
Yulu Jin (University of California, Davis); Lifeng Lai (UC Davis)
Fairness and robustness are critical elements of trustworthy machine learning systems. In this paper, we use a minimax framework to design an adversarially robust fair regression model that achieves optimal performance in the presence of an attacker who can perform a rank-one attack on the dataset. By solving the resulting nonsmooth nonconvex-nonconcave minimax problem, we obtain both the optimal adversary and the robust fairness-aware regression model. Numerical results on two real-world datasets illustrate that, on the poisoned dataset, the proposed adversarially robust fair model outperforms other fair machine learning models in both prediction accuracy and group-based fairness measures.
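The abstract does not spell out the model details. As a minimal sketch of the kind of minimax game described, one can assume a squared-loss regression objective, a group mean-prediction-gap fairness penalty, a norm-bounded rank-one feature perturbation X + eps * u v^T, and plain alternating gradient descent-ascent. All function names, the penalty choice, and the hyperparameters below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def robust_fair_regression(X, y, g, lam=1.0, eps=0.5, steps=500, lr=1e-2):
    """Toy gradient descent-ascent for a robust fair regression minimax game.

    X: (n, d) features, y: (n,) targets, g: (n,) binary group labels.
    The adversary poisons X with a rank-one perturbation eps * u v^T whose
    factors are kept in unit balls; the learner minimizes squared loss plus
    lam * (group mean-prediction gap)^2.  Assumed setup, not the paper's.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    w = np.zeros(d)
    u = rng.normal(size=n) / np.sqrt(n)  # adversary's left factor
    v = rng.normal(size=d) / np.sqrt(d)  # adversary's right factor
    s = np.where(g == 1, 1.0 / (g == 1).sum(), -1.0 / (g == 0).sum())

    for _ in range(steps):
        Xp = X + eps * np.outer(u, v)          # rank-one poisoned features
        pred = Xp @ w
        r = pred - y                           # residuals
        gap = s @ pred                         # group mean-prediction gap
        # Gradient w.r.t. the learner's weights (loss + fairness penalty).
        grad_w = 2 * Xp.T @ r / n + 2 * lam * gap * (Xp.T @ s)
        # Gradient of the same objective w.r.t. the poisoned matrix Xp,
        # then chained through Xp = X + eps * u v^T for the adversary.
        dXp = 2 * np.outer(r, w) / n + 2 * lam * gap * np.outer(s, w)
        grad_u = eps * dXp @ v
        grad_v = eps * dXp.T @ u

        w -= lr * grad_w                       # learner: descent
        u += lr * grad_u                       # adversary: ascent
        v += lr * grad_v
        # Project the attack factors back to unit balls to bound the attack.
        u /= max(1.0, np.linalg.norm(u))
        v /= max(1.0, np.linalg.norm(v))
    return w
```

Note that plain gradient descent-ascent carries no convergence guarantee for a nonsmooth nonconvex-nonconcave problem of the kind the paper studies; the sketch only illustrates the structure of the learner-versus-adversary game, not the paper's solution technique.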