
SORA: Scalable Black-box Reachability Analyser on Neural Networks

Peipei Xu (University of Liverpool); Fu Wang (University of Exeter); Wenjie Ruan (University of Exeter); Chi Zhang (University of Exeter); Xiaowei Huang (Liverpool University)

06 Jun 2023

The vulnerability of deep neural networks (DNNs) to input perturbations poses a significant challenge. Recent work on robustness verification of DNNs not only lacks scalability but also imposes severe restrictions on the architecture (layers, activation functions, etc.). To address these limitations, we propose a novel framework, SORA, for scalable black-box reachability analysis of DNNs. SORA works on a broad class of neural network structures, including networks with very deep layers and a huge number of neurons with nonlinear activation functions. Based on Lipschitz continuity, SORA verifies the reachability property of DNNs with a novel optimisation algorithm that carries a global convergence guarantee, and it does not require access to the inner structure of the DNNs. Our experimental results show that, compared to existing verification methods, SORA achieves superior efficiency and scalability, especially on deep networks with many neurons and various types of nonlinear activation functions.
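To illustrate the general idea behind Lipschitz-based black-box reachability (this is a minimal one-dimensional sketch of the classic Piyavskii–Shubert branch-and-bound scheme, not SORA's actual algorithm; the function `lipschitz_min`, its parameters, and the toy objective are all hypothetical), one can certify a lower bound on the minimum of a function over an input region using only black-box queries and a known Lipschitz constant:

```python
import heapq

def lipschitz_min(f, lo, hi, L, tol=1e-3, max_queries=10_000):
    """Certified lower bound on min f over [lo, hi] for an L-Lipschitz f,
    using only black-box queries (Piyavskii-Shubert branch-and-bound).
    Returns (lower_bound, best_observed) with lower_bound <= true min."""
    fa, fb = f(lo), f(hi)
    best = min(fa, fb)  # best (smallest) value observed so far

    # Lipschitz continuity bounds the minimum over [a, b] from below:
    # min f on [a, b] >= (f(a) + f(b)) / 2 - L * (b - a) / 2
    def bound(a, fa, b, fb):
        return (fa + fb) / 2 - L * (b - a) / 2

    heap = [(bound(lo, fa, hi, fb), lo, fa, hi, fb)]  # min-heap on bound
    queries = 2
    while heap and queries < max_queries:
        lb, a, fa, b, fb = heapq.heappop(heap)
        if best - lb <= tol:
            # Certificate: lb <= true minimum <= best, gap within tol.
            return lb, best
        m = (a + b) / 2          # split the most promising interval
        fm = f(m)
        queries += 1
        best = min(best, fm)
        heapq.heappush(heap, (bound(a, fa, m, fm), a, fa, m, fm))
        heapq.heappush(heap, (bound(m, fm, b, fb), m, fm, b, fb))
    return (heap[0][0] if heap else best), best
```

For example, for `f(x) = (x - 0.3)**2` on `[-1, 1]` the derivative magnitude is at most 2.6, so `L = 2.6` is a valid Lipschitz constant, and the routine brackets the true minimum value 0 from below without ever inspecting `f` internally. Extending this certificate idea to high-dimensional inputs efficiently is exactly where methods like SORA come in.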
