22 Sep 2020

Machine learning applications have emerged in many aspects of our lives, such as credit lending, insurance rating, and employment screening. Consequently, such systems are required to be nondiscriminatory and fair with respect to users' sensitive features, e.g., race, sexual orientation, and religion. To address this issue, this paper develops a minimax adversarial framework, called the features protector (FP) framework, to achieve an information-theoretic trade-off between minimizing the distortion of the target data and ensuring that the learned representation has similar distributions across sensitive features. We evaluate the performance of the proposed framework on two real-world datasets. Preliminary empirical evaluation shows that our framework provides both accurate and fair decisions.
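To make the minimax idea concrete, below is a minimal sketch of an adversarial debiasing training loop in the spirit of the abstract: an encoder/decoder pair minimizes distortion of the target data while an adversary tries to recover the sensitive feature from the encoded representation, and the encoder is trained to fool it. All module shapes, names (Encoder, Adversary, lambda_fair, train_step), and the loss weighting are illustrative assumptions, not the paper's actual FP architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_in=16, d_z=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, d_z=8, d_out=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_out))
    def forward(self, z):
        return self.net(z)

class Adversary(nn.Module):
    """Tries to predict a (binary) sensitive feature from the representation."""
    def __init__(self, d_z=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_z, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, z):
        return self.net(z)

enc, dec, adv = Encoder(), Decoder(), Adversary()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lambda_fair = 1.0  # assumed trade-off weight: distortion vs. sensitive-feature leakage

def train_step(x, s):
    """x: batch of target data; s: binary sensitive feature in {0., 1.}."""
    # 1) Adversary step: improve its ability to recover s from the
    #    (detached) representation z.
    z = enc(x).detach()
    opt_adv.zero_grad()
    adv_loss = bce(adv(z).squeeze(1), s)
    adv_loss.backward()
    opt_adv.step()

    # 2) Encoder/decoder step: minimize reconstruction distortion while
    #    *maximizing* the adversary's loss (the minimax game), so the
    #    representation carries little information about s.
    opt_main.zero_grad()
    z = enc(x)
    distortion = nn.functional.mse_loss(dec(z), x)
    fool_loss = -bce(adv(z).squeeze(1), s)
    (distortion + lambda_fair * fool_loss).backward()
    opt_main.step()
    return distortion.item()

# Usage on synthetic data (shapes match the assumed defaults above):
x = torch.randn(64, 16)
s = torch.randint(0, 2, (64,)).float()
train_step(x, s)
```

Raising lambda_fair pushes the representation toward similar distributions across the sensitive groups at the cost of higher distortion, which is the trade-off the abstract describes.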
