Adaptive and Diverse Techniques for Generating Adversarial Examples

Warren He

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2018-175
December 14, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-175.pdf

Deep neural networks (DNNs) have rapidly advanced the state of the art on many important, difficult problems. However, recent research has shown that they are vulnerable to adversarial examples: small, worst-case perturbations to a DNN model's input can cause the input to be processed incorrectly. Subsequent work has proposed a variety of ways to defend DNN models against adversarial examples, but many defenses have not been adequately evaluated against general adversaries.

In this dissertation, we present techniques for generating adversarial examples in order to evaluate defenses under a threat model with an adaptive adversary, focusing on the task of image classification. We demonstrate our techniques on four proposed defenses and identify new limitations in them.

Next, in order to assess the generality of a promising class of defenses based on adversarial training, we exercise these defenses on a diverse set of points near benign examples, beyond the adversarial examples generated by well-known attack methods. First, we analyze the neighborhood around benign examples along a large sample of directions. Second, we experiment with three new attack methods that differ in important ways from previous additive gradient-based methods. We find that these defenses are less robust to our new attacks.
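
For context, "additive gradient-based" refers to attacks such as the Fast Gradient Sign Method (FGSM), which add a small signed-gradient step to the input. The following is a minimal sketch of that idea using a toy logistic model whose gradient can be written analytically; the model, the function name fgsm_perturb, and the epsilon value are illustrative assumptions, not code from the dissertation.

import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.05):
    """Return x plus an additive perturbation of L-infinity size eps.

    The toy model scores the positive class as sigmoid(w.x + b); the
    cross-entropy loss gradient with respect to x is (p - y_true) * w,
    and FGSM moves each input dimension by eps in the sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))      # predicted probability
    grad_x = (p - y_true) * w                            # d(loss)/d(x)
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)  # stay in valid pixel range

# Usage: a benign "image" x labeled as class 1 is nudged toward class 0.
rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1
x = rng.uniform(0.2, 0.8, size=16)
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.05)
print(np.max(np.abs(x_adv - x)))  # perturbation stays within eps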

Overall, our results show that current defenses perform better against existing, well-known attacks than against other nearby points and new attack methods, which suggests that we have yet to see a defense that can stand up to a general adversary. We hope that this work sheds light on directions for future work on more general defenses.

Advisor: Dawn Song


BibTeX citation:

@phdthesis{He:EECS-2018-175,
    Author = {He, Warren},
    Title = {Adaptive and Diverse Techniques for Generating Adversarial Examples},
    School = {EECS Department, University of California, Berkeley},
    Year = {2018},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-175.html},
    Number = {UCB/EECS-2018-175},
    Abstract = {Deep neural networks (DNNs) have rapidly advanced the state of the art on many important, difficult problems. However, recent research has shown that they are vulnerable to adversarial examples: small, worst-case perturbations to a DNN model's input can cause the input to be processed incorrectly. Subsequent work has proposed a variety of ways to defend DNN models against adversarial examples, but many defenses have not been adequately evaluated against general adversaries.

In this dissertation, we present techniques for generating adversarial examples in order to evaluate defenses under a threat model with an adaptive adversary, focusing on the task of image classification. We demonstrate our techniques on four proposed defenses and identify new limitations in them.

Next, in order to assess the generality of a promising class of defenses based on adversarial training, we exercise these defenses on a diverse set of points near benign examples, beyond the adversarial examples generated by well-known attack methods. First, we analyze the neighborhood around benign examples along a large sample of directions. Second, we experiment with three new attack methods that differ in important ways from previous additive gradient-based methods. We find that these defenses are less robust to our new attacks.

Overall, our results show that current defenses perform better against existing, well-known attacks than against other nearby points and new attack methods, which suggests that we have yet to see a defense that can stand up to a general adversary. We hope that this work sheds light on directions for future work on more general defenses.}
}

EndNote citation:

%0 Thesis
%A He, Warren
%T Adaptive and Diverse Techniques for Generating Adversarial Examples
%I EECS Department, University of California, Berkeley
%D 2018
%8 December 14
%@ UCB/EECS-2018-175
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-175.html
%F He:EECS-2018-175