  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:14:29
11 Jun 2021

We consider the problem of designing a robust classifier in the presence of an adversary who seeks to degrade classification performance by carefully falsifying the test instance. We propose a model-agnostic defense in which the true class label of the falsified instance is inferred from its proximity to each class, measured with respect to the class-conditional data distributions. We present a k-nearest-neighbors-type approach that performs a sample-based approximation of this probabilistic proximity analysis. The proposed approach is evaluated on three real-world datasets in a game-theoretic setting, in which the adversary is assumed to optimize the attack design against the employed defense. In this evaluation, the proposed defense significantly outperforms benchmark methods across various attack scenarios, demonstrating its efficacy against optimally designed attacks.
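The abstract does not specify the exact proximity measure, so the following is only a minimal sketch of one plausible instantiation of a k-nearest-neighbors-type proximity classifier: the test instance is scored against each class by its mean distance to the k nearest training samples of that class, and assigned to the closest class. The function name, the Euclidean distance, and the mean-of-k-distances score are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def knn_proximity_classify(x_test, X_train, y_train, k=5):
    """Assign x_test to the class whose class-conditional sample it is
    closest to, using the mean distance to the k nearest neighbors
    within each class as a sample-based proximity score.

    NOTE: illustrative sketch only; the distance metric and scoring
    rule are assumptions, not taken from the paper."""
    scores = {}
    for label in np.unique(y_train):
        X_c = X_train[y_train == label]        # class-conditional sample
        dists = np.linalg.norm(X_c - x_test, axis=1)
        k_eff = min(k, len(dists))             # guard against small classes
        nearest = np.partition(dists, k_eff - 1)[:k_eff]
        scores[label] = nearest.mean()         # smaller = closer to class
    return min(scores, key=scores.get)
```

Because the decision depends only on proximity to clean class-conditional samples rather than on a trained model's decision boundary, such a rule is model-agnostic in the sense described above: a falsified instance is relabeled by the class distribution it remains closest to.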

Chairs:
George Atia
