Michael McCoyd

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-170

August 14, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-170.pdf

Machine learning is increasingly used to make sense of our world, in areas ranging from spam detection and recommendation systems to image classification. In each of these areas, however, it is vulnerable to adversarial manipulation. Within adversarial machine learning, we examine attacks on image classification and defenses against them. We construct spoofs of face detection, and we create defenses against two attacks on image classification: ordinary adversarial examples and adversarial patches.

We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that the algorithm detects as faces yet humans do not notice as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces, yet no human would consider a face. Moreover, we show that it is possible to construct images that fool face detection even after the images are printed and then photographed.
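As a concrete point of reference for the system under attack (not the tooling used in this work), the Viola-Jones cascade detector ships with OpenCV, and a candidate spoof image can be checked against it along these lines; the file name and detector parameters below are illustrative assumptions.

    # Minimal sketch: does OpenCV's Viola-Jones (Haar cascade) detector
    # report a face in a candidate image? Paths/parameters are illustrative.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("candidate_spoof.png")   # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns bounding boxes of regions classified as faces.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("Viola-Jones reports", len(faces), "face(s)")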

Adversarial examples enable crafted attacks against deep neural network image classification: they change the model's classification of an image without changing how humans classify it.
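For concreteness, one standard way such examples are crafted (the fast gradient sign method, a widely used baseline rather than the specific attacks studied here) perturbs the image a small step in the direction of the sign of the loss gradient; a minimal PyTorch sketch:

    # Minimal FGSM sketch (illustrative baseline, not this thesis's attack code).
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        """Return x perturbed by eps in the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # A small, visually negligible step that can flip the model's prediction.
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()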

We propose a defense that expands the training set with a single, large, and diverse class of background images, striving to ‘fill’ the space around the classification boundary. We find that our defense aids the detection of simple attacks on EMNIST, but not of advanced attacks. We discuss several limitations of our examination.
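A minimal sketch of the idea, assuming EMNIST's 47-class balanced split and stand-in tensors in place of the real data (this is not the thesis's training code): the background pool is assigned one extra label, and the classifier gains one extra output whose prediction is treated as a rejection.

    # Sketch: augment a training set with one extra "background" class.
    # Stand-in tensors replace real EMNIST and background images.
    import torch
    from torch.utils.data import ConcatDataset, TensorDataset

    NUM_REAL_CLASSES = 47                  # EMNIST balanced split
    BACKGROUND_LABEL = NUM_REAL_CLASSES    # one additional class index

    real_x = torch.rand(1000, 1, 28, 28)
    real_y = torch.randint(0, NUM_REAL_CLASSES, (1000,))
    bg_x = torch.rand(5000, 1, 28, 28)     # large, diverse background pool
    bg_y = torch.full((len(bg_x),), BACKGROUND_LABEL)

    train_set = ConcatDataset([TensorDataset(real_x, real_y),
                               TensorDataset(bg_x, bg_y)])
    # The classifier's final layer has NUM_REAL_CLASSES + 1 outputs;
    # at test time, inputs predicted as BACKGROUND_LABEL are rejected.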

An attacker limited to changing just a small patch of an image can still deceive deep learning image classification. We propose a defense against such patch attacks based on multiple partial occlusions of the image, such that a few occlusions each completely hide the patch. We provide certified accuracy for CIFAR-10, Fashion-MNIST, and MNIST, with a tunable tradeoff between the false-positive rate and certified accuracy. For CIFAR-10 and a 5 × 5 patch, we can provide certified accuracy for 43.8% of images, at a cost of only 1.6% in clean-image accuracy relative to the architecture we defend, or 0.1% relative to our own training of that architecture, including a 0.2% false-positive rate.
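A simplified sketch of the occlusion idea (the mask size, stride, and decision rule below are assumptions for illustration, not the exact certification procedure of this thesis): slide an occluding block over the image, classify each occluded copy, and examine the set of predictions; if every copy agrees on one label, a patch that at least one occlusion fully covers could not have forced a different outcome.

    # Simplified sketch: classify partially occluded copies of an image.
    # Mask size, stride, and the agreement rule are illustrative assumptions.
    import torch

    def occluded_predictions(model, x, mask_size=9, stride=5):
        """Classify copies of x (C, H, W) with a square region zeroed out."""
        _, H, W = x.shape
        preds = []
        for top in range(0, H - mask_size + 1, stride):
            for left in range(0, W - mask_size + 1, stride):
                occluded = x.clone()
                occluded[:, top:top + mask_size, left:left + mask_size] = 0.0
                preds.append(model(occluded.unsqueeze(0)).argmax(dim=1).item())
        return preds

    # If all occluded copies agree, a small patch hidden by some occlusion
    # could not have changed that unanimous prediction.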

Advisor: David A. Wagner


BibTeX citation:

@phdthesis{McCoyd:EECS-2020-170,
    Author= {McCoyd, Michael},
    Title= {Background and Occlusion Defenses Against Adversarial Examples and Adversarial Patches},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-170.html},
    Number= {UCB/EECS-2020-170},
    Abstract= {Machine learning is increasingly used to make sense of our world, in areas ranging from spam detection and recommendation systems to image classification. In each of these areas, however, it is vulnerable to adversarial manipulation. Within adversarial machine learning, we examine attacks on image classification and defenses against them. We construct spoofs of face detection, and we create defenses against two attacks on image classification: ordinary adversarial examples and adversarial patches.

We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that the algorithm detects as faces yet humans do not notice as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces, yet no human would consider a face. Moreover, we show that it is possible to construct images that fool face detection even after the images are printed and then photographed.

Adversarial examples enable crafted attacks against deep neural network image classification: they change the model's classification of an image without changing how humans classify it.

We propose a defense that expands the training set with a single, large, and diverse class of background images, striving to ‘fill’ the space around the classification boundary. We find that our defense aids the detection of simple attacks on EMNIST, but not of advanced attacks. We discuss several limitations of our examination.

An attacker limited to changing just a small patch of an image can still deceive deep learning image classification. We propose a defense against such patch attacks based on multiple partial occlusions of the image, such that a few occlusions each completely hide the patch. We provide certified accuracy for CIFAR-10, Fashion-MNIST, and MNIST, with a tunable tradeoff between the false-positive rate and certified accuracy. For CIFAR-10 and a 5 × 5 patch, we can provide certified accuracy for 43.8% of images, at a cost of only 1.6% in clean-image accuracy relative to the architecture we defend, or 0.1% relative to our own training of that architecture, including a 0.2% false-positive rate.},
}

EndNote citation:

%0 Thesis
%A McCoyd, Michael 
%T Background and Occlusion Defenses Against Adversarial Examples and Adversarial Patches
%I EECS Department, University of California, Berkeley
%D 2020
%8 August 14
%@ UCB/EECS-2020-170
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-170.html
%F McCoyd:EECS-2020-170