Ziye Ma

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-198

December 2, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-198.pdf

In this thesis, we consider certifying the robustness of pre-trained ReLU neural networks against adversarial input perturbations. More specifically, we focus on the semidefinite programming (SDP) approach to certification, in which the original non-convex certification problem is relaxed into a semidefinite program.
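For concreteness, the sketch below sets up such a lifted SDP relaxation for a one-hidden-layer ReLU network with CVXPY. It is a minimal illustration with assumed toy weights, bias, output direction, and input box, not the exact formulation used in this report; the helper sdp_bound and its extra_cons hook are hypothetical names introduced here so that later sketches can reuse them.

    # A minimal, self-contained sketch (toy data assumed; not the report's exact
    # formulation) of an SDP relaxation certifying a one-hidden-layer ReLU
    # network over an l_inf input box, written with CVXPY.
    import numpy as np
    import cvxpy as cp

    def sdp_bound(W, b, c, l, u, extra_cons=None):
        """Upper bound on max_{l <= x <= u} c^T relu(W x + b) via a lifted SDP."""
        m, n = W.shape
        d = 1 + n + m
        P = cp.Variable((d, d), symmetric=True)    # P ~ [1; x; z][1; x; z]^T
        x, z = P[0, 1:1 + n], P[0, 1 + n:]
        Pxx = P[1:1 + n, 1:1 + n]
        Pxz = P[1:1 + n, 1 + n:]
        Pzz = P[1 + n:, 1 + n:]
        cons = [P >> 0, P[0, 0] == 1,
                z >= 0, z >= W @ x + b,                                # ReLU envelope
                cp.diag(Pzz) == cp.diag(W @ Pxz) + cp.multiply(b, z),  # z*(z - Wx - b) = 0, lifted
                x >= l, x <= u,
                cp.diag(Pxx) <= cp.multiply(l + u, x) - l * u]         # (x - l)*(x - u) <= 0, lifted
        if extra_cons is not None:
            cons += extra_cons(x, z, Pxx, Pxz, Pzz)
        prob = cp.Problem(cp.Maximize(c @ z), cons)
        prob.solve()
        return prob.value

    # Toy instance: 2 inputs, 3 hidden neurons, l_inf radius 0.1 around the origin.
    W = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 1.0]])
    b = np.zeros(3)
    c = np.ones(3)
    l, u = -0.1 * np.ones(2), 0.1 * np.ones(2)
    print("SDP bound on the worst-case output:", sdp_bound(W, b, c, l, u))

The returned value is an upper bound on the worst-case output over the input box; a robustness certificate is obtained whenever this bound stays below the decision threshold of interest.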

The SDP approach enjoys a high certification rate because its relaxation gap is naturally smaller than that of other methods, including linear programming (LP) based and mixed-integer programming (MIP) based approaches. However, even SDP approaches suffer from large relaxation gaps when applied to large networks with many hidden layers. To this end, this work proposes two theoretically sound and empirically validated techniques to further bridge this gap when the network is large or when the existing SDP approach is unsatisfactory. Both techniques can achieve exact certification in the asymptotic regime, at the cost of exponential complexity. Moreover, we show that even with polynomial complexity, both techniques greatly reduce the relaxation gap compared to the original method.

The first technique is to partition the input uncertainty set and solve the convex relaxation on each part separately. We show that this approach is guaranteed to tighten the relaxation error regardless of the given neural network. Using the geometry of the feasible set, we show that partitions should be uniform along the coordinate axes of the input space, and we then determine which coordinate best reduces the worst-case SDP relaxation error in the case of a two-part partition. The proposed methods are illustrated experimentally on IRIS classification networks and are shown to significantly reduce the relaxation error, even providing certificates that cannot be obtained without partitioning.
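As a rough illustration of the partitioning idea (reusing the sdp_bound sketch and toy instance above), a two-part uniform split along one input coordinate replaces one relaxation with two, and the certificate becomes the larger of the two bounds. The loop over coordinates here is a naive stand-in, not the worst-case-optimal coordinate rule derived in the report:

    # Two-part uniform partition of the input box along each coordinate in turn
    # (illustrative; the report derives which coordinate is best to split).
    base = sdp_bound(W, b, c, l, u)
    for j in range(len(l)):
        mid = 0.5 * (l[j] + u[j])
        u_left, l_right = u.copy(), l.copy()
        u_left[j], l_right[j] = mid, mid
        split = max(sdp_bound(W, b, c, l, u_left),
                    sdp_bound(W, b, c, l_right, u))
        print(f"split axis {j}: bound {base:.4f} without vs. {split:.4f} with partitioning")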

The second technique we introduce to reduce the relaxation gap is to add linear constraints to the SDP program that best approximate non-convex cuts, derived via disjunctive programming. The proposed method amounts to a sequential SDP technique. We analyze its performance both theoretically and empirically, and show that it bridges the gap as the number of cuts increases.
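To illustrate the mechanics of appending linear cuts to the SDP and re-solving, the sketch below adds one simple, well-known family of valid linear constraints implied by the ReLU disjunction (z_i = 0 or z_i = w_i^T x + b_i): the upper-envelope ("triangle") inequality computed from interval-arithmetic preactivation bounds. This is not the disjunctive-programming cut-generation procedure of the report, only a stand-in showing how a cut enters the constraint list through the extra_cons hook assumed earlier:

    # Interval-arithmetic bounds on the preactivation W x + b over the box [l, u].
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_u = W_pos @ u + W_neg @ l + b
    pre_l = W_pos @ l + W_neg @ u + b

    def triangle_cuts(x, z, Pxx, Pxz, Pzz):
        # Valid linear cuts implied by the disjunction z_i = 0 OR z_i = w_i^T x + b_i:
        #   z_i <= pre_u_i / (pre_u_i - pre_l_i) * (w_i^T x + b_i - pre_l_i),
        # valid whenever pre_l_i < 0 < pre_u_i (which holds for the toy instance above).
        slope = pre_u / (pre_u - pre_l)
        return [z <= cp.multiply(slope, W @ x + b - pre_l)]

    print("bound without cuts:", sdp_bound(W, b, c, l, u))
    print("bound with cuts:   ", sdp_bound(W, b, c, l, u, extra_cons=triangle_cuts))

In the sequential scheme described above, this step would be repeated: solve the SDP, generate a new cut from the current solution, append it to the constraint list, and re-solve.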

Advisor: Somayeh Sojoudi


BibTeX citation:

@mastersthesis{Ma:EECS-2020-198,
    Author= {Ma, Ziye},
    Title= {Tightening the SDP Relaxation Gap in Neural Network Verification},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-198.html},
    Number= {UCB/EECS-2020-198},
    Abstract= {In this thesis, we consider certifying the robustness of pre-trained ReLU neural networks against adversarial input perturbations. More specifically, we focus on the semidefinite programming (SDP) approach to certification, in which the original non-convex certification problem is relaxed into a semidefinite program.

The SDP approach enjoys a high certification rate because its relaxation gap is naturally smaller than that of other methods, including linear programming (LP) based and mixed-integer programming (MIP) based approaches. However, even SDP approaches suffer from large relaxation gaps when applied to large networks with many hidden layers. To this end, this work proposes two theoretically sound and empirically validated techniques to further bridge this gap when the network is large or when the existing SDP approach is unsatisfactory. Both techniques can achieve exact certification in the asymptotic regime, at the cost of exponential complexity. Moreover, we show that even with polynomial complexity, both techniques greatly reduce the relaxation gap compared to the original method.

The first technique is to partition the input uncertainty set and solve the convex relaxation on each part separately. We show that this approach is guaranteed to tighten the relaxation error regardless of the given neural network. Using the geometry of the feasible set, we show that partitions should be uniform along the coordinate axes of the input space, and we then determine which coordinate best reduces the worst-case SDP relaxation error in the case of a two-part partition. The proposed methods are illustrated experimentally on IRIS classification networks and are shown to significantly reduce the relaxation error, even providing certificates that cannot be obtained without partitioning.

The second technique we introduce to reduce the relaxation gap is to add linear constraints to the SDP program that best approximate non-convex cuts, derived via disjunctive programming. The proposed method amounts to a sequential SDP technique. We analyze its performance both theoretically and empirically, and show that it bridges the gap as the number of cuts increases.},
}

EndNote citation:

%0 Thesis
%A Ma, Ziye 
%T Tightening the SDP Relaxation Gap in Neural Network Verification
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 2
%@ UCB/EECS-2020-198
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-198.html
%F Ma:EECS-2020-198