The problem of making classifier design resilient to test data falsification is considered. In the literature, a few countermeasures have been proposed to defend machine learning algorithms against test data falsification, but a common assumption employed therein is that all feature entries of the test data are equally vulnerable to falsification. When test data entries consist of data collected from various sources, such as different types of sensor devices, the vulnerability of individual entries to falsification attacks can differ significantly depending on how data creation and transmission procedures are secured. In this paper, we present an attack-cost-aware adversarial learning framework that takes into account the (potentially inhomogeneous) vulnerability characteristics of test data entries in designing an attack-resilient classifier. We demonstrate the efficacy of the proposed approach through experiments on the MNIST handwritten digit database.
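As an illustration of the general idea (not the paper's actual algorithm), attack-cost awareness can be sketched as adversarial training in which each feature receives its own perturbation budget, small for well-secured data sources and large for easily falsified ones. The per-feature budgets `eps`, the synthetic data, and the FGSM-style inner attack below are all assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data with 4 features; features 2-3 stand in for
# poorly secured sources that an attacker can falsify more easily.
n, d = 200, 4
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -1.0, 0.5, 0.5])
y = (X @ w_true > 0).astype(float)

# Hypothetical per-feature attack budgets: secure entries get a small
# budget, vulnerable entries a large one.
eps = np.array([0.01, 0.01, 0.3, 0.3])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Adversarially trained logistic regression: at each step the training
# points are perturbed by a budget-scaled gradient-sign attack before
# the weight update (an FGSM-style inner maximization).
w = np.zeros(d)
lr = 0.1
for _ in range(300):
    p = sigmoid(X @ w)
    # Gradient of the logistic loss w.r.t. the inputs is (p - y) * w.
    grad_x = np.outer(p - y, w)              # shape (n, d)
    X_adv = X + eps * np.sign(grad_x)        # budget-aware perturbation
    # Standard gradient step on the perturbed points.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / n
    w -= lr * grad_w

# Clean-data accuracy of the robust classifier.
acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
```

Because the inner attack is cheapest along the high-budget features, training tends to shift weight away from the vulnerable entries and toward the well-secured ones, which is the qualitative behavior an attack-cost-aware design aims for.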