Optimization for Robustness Evaluation beyond Lp Metrics
Hengyue Liang (University of Minnesota); Buyun Liang (University of Minnesota); Ying Cui (University of Minnesota); Tim Mitchell (Queens College / CUNY); Ju Sun (University of Minnesota)
SPS
Empirical evaluation of the adversarial robustness of deep learning models involves solving non-trivial constrained optimization problems. Popular numerical algorithms for these constrained problems rely predominantly on projected gradient descent (PGD) and mostly handle adversarial perturbations modeled by the $\ell_1$, $\ell_2$, and $\ell_\infty$ metrics. In this paper, we introduce a novel algorithmic framework that blends a general-purpose constrained-optimization solver, PyGRANSO, With Constraint-Folding (PWCF), to add reliability and generality to robustness evaluation. PWCF 1) finds good-quality solutions without the need for delicate hyperparameter tuning and 2) can handle more general perturbation types, e.g., those modeled by general $\ell_p$ (where $p > 0$) and perceptual (non-$\ell_p$) distances, which are inaccessible to existing PGD-based algorithms.
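To make the constrained problem concrete: PGD-style robustness evaluation seeks a perturbation $\delta$ maximizing the model's loss subject to a norm bound, e.g. $\max_\delta \; \mathrm{loss}(x+\delta)$ s.t. $\|\delta\|_\infty \le \epsilon$. The sketch below is not the paper's PWCF method; it is a minimal, dependency-free illustration of the classic $\ell_\infty$ PGD baseline on a toy linear scorer, with all names (`pgd_linf`, `w`, `lr`) chosen for illustration only.

```python
def pgd_linf(x, w, eps, steps=50, lr=0.1):
    """Illustrative l_inf PGD attack on a toy linear loss w . (x + delta).

    Solves: max_delta  w . (x + delta)  s.t.  ||delta||_inf <= eps,
    via sign-gradient ascent followed by projection onto the l_inf ball.
    """
    delta = [0.0] * len(x)
    for _ in range(steps):
        # For the toy loss w . (x + delta), the gradient w.r.t. delta is w.
        grad = w
        # Sign-gradient ascent step (the standard l_inf PGD update).
        delta = [d + lr * ((g > 0) - (g < 0)) for d, g in zip(delta, grad)]
        # Projection onto the feasible set: clip each coordinate to [-eps, eps].
        delta = [max(-eps, min(eps, d)) for d in delta]
    return delta
```

Because the projection for the $\ell_\infty$ ball is a simple coordinate-wise clip, PGD is easy to implement for $p \in \{1, 2, \infty\}$; for general $\ell_p$ or perceptual distances no such cheap projection exists, which is the gap the paper's PWCF framework targets.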