Model-Agnostic Defense for Lane Detection Against Adversarial Attack

Henry Xu, An Ju and David A. Wagner

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2021-105
May 14, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-105.pdf

The susceptibility of neural networks to adversarial attack raises serious safety concerns for lane detection, a domain where such models are widely applied. Recent work on adversarial road patches has successfully induced the perception of lane lines of arbitrary shape, presenting an avenue for rogue control of vehicle behavior. In this paper, we propose a modular lane verification system that can catch such threats before the autonomous driving system is misled, while remaining agnostic to the particular lane detection model. Our experiments show that implementing the system with simple convolutional neural networks (CNNs) can defend against a wide gamut of attacks on lane detection models. We can detect 96% of non-adaptive bounded attacks, 90% of adaptive bounded attacks, and 90% of adaptive patch attacks while preserving accurate identification of at least 95% of true lanes, using a 3-layer architecture that adds at most 10% to inference time. Using ResNet-18 as a backbone, we can detect 99% of non-adaptive bounded attacks and 98% of adaptive bounded attacks, indicating that our proposed verification system is effective at mitigating lane detection security risks.
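The abstract describes the lightweight verifier only as a "3-layer" CNN. As an illustration of what such a model-agnostic verifier could look like, here is a minimal PyTorch sketch; the input format (fixed-size image crops around a proposed lane), the class name `LaneVerifier`, and the acceptance threshold are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LaneVerifier(nn.Module):
    """Hypothetical 3-layer CNN that scores whether a proposed lane
    is supported by real lane-line evidence in the image (1 = genuine)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)         # (N, 64)
        return torch.sigmoid(self.classifier(h))  # confidence in (0, 1)

# Usage sketch: score crops taken along a lane proposed by any upstream
# detector, and reject the detection if confidence falls below a threshold.
verifier = LaneVerifier()
crops = torch.randn(4, 3, 64, 64)   # 4 candidate lane crops (dummy data)
scores = verifier(crops)            # per-crop genuineness scores
accept = (scores > 0.5).squeeze(1)  # boolean verdict per crop
```

Because the verifier consumes only the image and the proposed lane geometry, it can sit behind any lane detection model, which is what makes the defense model-agnostic.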

Advisor: David A. Wagner


BibTeX citation:

@mastersthesis{Xu:EECS-2021-105,
    Author = {Xu, Henry and Ju, An and Wagner, David A.},
    Title = {Model-Agnostic Defense for Lane Detection Against Adversarial Attack},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-105.html},
    Number = {UCB/EECS-2021-105},
    Abstract = {The susceptibility of neural networks to adversarial attack raises serious safety concerns for lane detection, a domain where such models are widely applied. Recent work on adversarial road patches has successfully induced the perception of lane lines of arbitrary shape, presenting an avenue for rogue control of vehicle behavior. In this paper, we propose a modular lane verification system that can catch such threats before the autonomous driving system is misled, while remaining agnostic to the particular lane detection model. Our experiments show that implementing the system with simple convolutional neural networks (CNNs) can defend against a wide gamut of attacks on lane detection models. We can detect 96% of non-adaptive bounded attacks, 90% of adaptive bounded attacks, and 90% of adaptive patch attacks while preserving accurate identification of at least 95% of true lanes, using a 3-layer architecture that adds at most 10% to inference time. Using ResNet-18 as a backbone, we can detect 99% of non-adaptive bounded attacks and 98% of adaptive bounded attacks, indicating that our proposed verification system is effective at mitigating lane detection security risks.}
}

EndNote citation:

%0 Thesis
%A Xu, Henry
%A Ju, An
%A Wagner, David A.
%T Model-Agnostic Defense for Lane Detection Against Adversarial Attack
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 14
%@ UCB/EECS-2021-105
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-105.html
%F Xu:EECS-2021-105