Nicholas Carlini

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2018-118

August 10, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-118.pdf

Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to test-time evasion attacks (i.e., adversarial examples): inputs specifically designed by an adversary to cause a neural network to misclassify them. This makes it concerning to apply neural networks in security-critical areas.

In this dissertation, we introduce a general framework for evaluating the robustness of neural networks through optimization-based methods. We apply our framework to two different domains, image recognition and automatic speech recognition, and find that it provides state-of-the-art results for both. To further demonstrate the power of our methods, we apply our attacks to break 14 defenses that have been proposed to alleviate adversarial examples.
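To make the flavor of such optimization-based attacks concrete, the following is a minimal, self-contained sketch (not the dissertation's exact attack): it perturbs an input to a toy linear classifier by gradient descent on a loss that trades off "classify as the target class" against "stay close to the original input". Every weight and constant is an illustrative placeholder.

# Minimal illustrative sketch of an optimization-based evasion attack
# (NOT the dissertation's exact formulation). All weights and constants
# below are placeholders chosen only for the demo.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))                # toy classifier: 3 classes, 5 features
x0 = rng.normal(size=5)                    # clean input
target = int((np.argmax(W @ x0) + 1) % 3)  # any class other than the clean one

def loss(x, c=5.0, margin=0.1):
    # hinge term: push the target logit above all other logits (plus a margin);
    # squared-distance term: keep the adversarial input close to x0
    z = W @ x
    other = np.max(np.delete(z, target))
    return c * max(other - z[target] + margin, 0.0) + np.sum((x - x0) ** 2)

def attack(steps=1000, lr=0.02, h=1e-4):
    x, best = x0.copy(), None
    for _ in range(steps):
        # central finite differences keep the sketch dependency-free
        grad = np.array([(loss(x + h * e) - loss(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])
        x -= lr * grad
        # keep the smallest successful perturbation seen so far
        if np.argmax(W @ x) == target:
            if best is None or np.linalg.norm(x - x0) < np.linalg.norm(best - x0):
                best = x.copy()
    return best

x_adv = attack()
print("clean prediction:", np.argmax(W @ x0))
if x_adv is None:
    print("attack did not succeed with these toy settings")
else:
    print("adversarial prediction:", np.argmax(W @ x_adv),
          "| perturbation norm:", round(float(np.linalg.norm(x_adv - x0)), 3))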

We then turn to the problem of designing a secure classifier. Given this apparently fundamental vulnerability of neural networks to adversarial examples, instead of taking an existing classifier and attempting to make it robust, we construct a new classifier that is provably robust by design under a restricted threat model. We consider the domain of malware classification, and construct a neural network classifier that cannot be fooled by an insertion adversary, who can only insert new functionality and not change existing functionality.
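The design idea can be illustrated with a short, hedged sketch: if the classifier's score is monotonically non-decreasing in every feature an attacker can only add, then inserting functionality can never lower the malware score. One standard way to obtain such monotonicity, shown below purely for illustration (the dissertation's actual architecture may differ), is to constrain all weights to be non-negative and use monotone activations.

# Hedged sketch of a classifier that an insertion-only adversary cannot fool.
# With non-negative weights and monotone activations, the malware score is
# monotonically non-decreasing in every binary feature, so flipping features
# from 0 to 1 (inserting functionality) can never lower the score.
# Architecture and weights are illustrative placeholders, not the
# dissertation's exact model.
import numpy as np

rng = np.random.default_rng(1)
W1 = np.abs(rng.normal(size=(8, 20)))  # non-negative weights => monotone layer
W2 = np.abs(rng.normal(size=8))

def malware_score(features):
    """features: binary vector; 1 = the program exhibits that behaviour."""
    h = np.maximum(W1 @ features, 0.0)  # ReLU is monotone
    return float(W2 @ h)

x = rng.integers(0, 2, size=20).astype(float)            # some program
x_inserted = np.maximum(x, rng.integers(0, 2, size=20))  # only 0 -> 1 flips

# inserting functionality can only raise (or keep) the malware score
assert malware_score(x_inserted) >= malware_score(x)
print(malware_score(x), "<=", malware_score(x_inserted))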

We hope this dissertation will provide a useful starting point for both evaluating and constructing neural networks robust in the presence of an adversary.

Advisor: David Wagner


BibTeX citation:

@phdthesis{Carlini:EECS-2018-118,
    Author= {Carlini, Nicholas},
    Title= {Evaluation and Design of Robust Neural Network Defenses},
    School= {EECS Department, University of California, Berkeley},
    Year= {2018},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-118.html},
    Number= {UCB/EECS-2018-118},
    Abstract= {Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to test-time evasion attacks (i.e., adversarial examples): inputs specifically designed by an adversary to cause a neural network to misclassify them. This makes it concerning to apply neural networks in security-critical areas.

In this dissertation, we introduce a general framework for evaluating the robustness of neural networks through optimization-based methods. We apply our framework to two different domains, image recognition and automatic speech recognition, and find that it provides state-of-the-art results for both. To further demonstrate the power of our methods, we apply our attacks to break 14 defenses that have been proposed to alleviate adversarial examples.

We then turn to the problem of designing a secure classifier. Given this apparently fundamental vulnerability of neural networks to adversarial examples, instead of taking an existing classifier and attempting to make it robust, we construct a new classifier that is provably robust by design under a restricted threat model. We consider the domain of malware classification, and construct a neural network classifier that cannot be fooled by an insertion adversary, who can only insert new functionality and not change existing functionality.

We hope this dissertation will provide a useful starting point for both evaluating and constructing neural networks robust in the presence of an adversary.},
}

EndNote citation:

%0 Thesis
%A Carlini, Nicholas 
%T Evaluation and Design of Robust Neural Network Defenses
%I EECS Department, University of California, Berkeley
%D 2018
%8 August 10
%@ UCB/EECS-2018-118
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-118.html
%F Carlini:EECS-2018-118