Assured Autonomy for Safety-Critical and Learning-Enabled Systems

Vicenc Rubies Royo

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-184

November 19, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-184.pdf

Autonomous systems are becoming ever more complex. This growth in complexity stems primarily from continual improvements in computational power, which have enabled, among other things, the use of more sophisticated high-dimensional dynamical models and of deep neural networks for perception and decision-making. Unfortunately, this increase in complexity is coupled with greater uncertainty about how these systems might behave in safety-critical settings where guarantees of performance are needed.

In this dissertation, we first address the challenges involved in computing safety certificates for high-dimensional safety-critical systems, and show how machine learning, and in particular artificial neural networks, can provide scalable approximate solutions that work well in practice. However, reliance on neural networks for autonomy is itself a challenge, since these function approximators can produce erroneous behavior when exposed, for example, to noise or adversarial attacks. With this in mind, the second half of the dissertation addresses the challenges involved in the verification of neural networks, and in particular, how to assess whether deep feedforward neural networks adhere to safety specifications.
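
To make the first half of the dissertation concrete, the following is a minimal, hypothetical sketch of the kind of computation it concerns: training a small neural network to approximate a safety value function for a toy double-integrator system. It is not the dissertation's actual algorithm; the dynamics, the safety margin l(x), the discounted one-step safety backup, the network architecture, and all hyperparameters are assumptions chosen only for illustration (Python with PyTorch).

# Hypothetical sketch (not the dissertation's method): fit a neural network to a
# discounted safety value function for a 2D double integrator, p' = v, v' = u.
import torch
import torch.nn as nn

torch.manual_seed(0)
dt, gamma = 0.05, 0.99
u_set = torch.tensor([-1.0, 0.0, 1.0])           # discretized control set, |u| <= 1

def l(x):                                        # safety margin: stay inside |p| <= 1
    return 1.0 - x[:, 0].abs()

def step(x, u):                                  # one Euler step of the dynamics
    p, v = x[:, 0], x[:, 1]
    return torch.stack([p + dt * v, v + dt * u], dim=1)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.rand(256, 2) * 4.0 - 2.0           # sample states in [-2, 2]^2
    with torch.no_grad():                        # discounted safety backup as regression target
        nxt = torch.stack([net(step(x, u)).squeeze(-1) for u in u_set], dim=1)
        target = (1 - gamma) * l(x) + gamma * torch.minimum(l(x), nxt.max(dim=1).values)
    loss = ((net(x).squeeze(-1) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# net(x) >= 0 marks states from which the constraint can (approximately) be maintained.
print(net(torch.tensor([[0.0, 0.0], [0.9, 1.5]])).squeeze(-1))

A state x with net(x) >= 0 is then treated as approximately certified safe, meaning some control policy keeps the system inside the constraint set; the zero level set of the learned value function plays the role of the safety certificate.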
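
For the second half, one standard way to check whether a feedforward ReLU network satisfies a safety specification for every input in a given set is to propagate sound bounds through the layers. The sketch below uses plain interval arithmetic (interval bound propagation) on a toy, randomly initialized network; the architecture, input box, and output threshold are placeholders, and this is only one of several verification approaches, not necessarily the one developed in the dissertation.

# Hypothetical sketch: sound output bounds for a feedforward ReLU network over an
# input box via interval bound propagation, used to check a simple output threshold.
import numpy as np

def interval_affine(lower, upper, W, b):
    # Interval propagation through y = W x + b, splitting W by sign.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lower + W_neg @ upper + b, W_pos @ upper + W_neg @ lower + b

def interval_bounds(weights, biases, x_lower, x_upper):
    # Sound (but conservative) bounds on every output of a ReLU network.
    lower, upper = x_lower, x_upper
    for i, (W, b) in enumerate(zip(weights, biases)):
        lower, upper = interval_affine(lower, upper, W, b)
        if i < len(weights) - 1:                 # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy 2-16-1 network with random weights; specification: output <= 1 on the input box.
rng = np.random.default_rng(0)
weights = [0.3 * rng.standard_normal((16, 2)), 0.3 * rng.standard_normal((1, 16))]
biases = [np.zeros(16), np.zeros(1)]
lo, hi = interval_bounds(weights, biases, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print("certified" if hi[0] <= 1.0 else "inconclusive: bounds too loose or specification violated")

Because interval bounds are sound but conservative, a "certified" answer is a proof that the specification holds, while an inconclusive answer means either the specification fails or a tighter bounding method (or an exact one) is needed.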

Advisor: Claire Tomlin


BibTeX citation:

@phdthesis{Rubies Royo:EECS-2020-184,
    Author= {Rubies Royo, Vicenc},
    Title= {Assured Autonomy for Safety-Critical and Learning-Enabled Systems},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Nov},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-184.html},
    Number= {UCB/EECS-2020-184},
    Abstract= {Autonomous systems are becoming ever more complex. This growth in complexity stems primarily from continual improvements in computational power, which have enabled, among other things, the use of more sophisticated high-dimensional dynamical models and of deep neural networks for perception and decision-making. Unfortunately, this increase in complexity is coupled with greater uncertainty about how these systems might behave in safety-critical settings where guarantees of performance are needed.

In this dissertation, we first address the challenges involved in computing safety certificates for high-dimensional safety-critical systems, and show how machine learning, and in particular artificial neural networks, can provide scalable approximate solutions that work well in practice. However, reliance on neural networks for autonomy is itself a challenge, since these function approximators can produce erroneous behavior when exposed, for example, to noise or adversarial attacks. With this in mind, the second half of the dissertation addresses the challenges involved in the verification of neural networks, and in particular, how to assess whether deep feedforward neural networks adhere to safety specifications.},
}

EndNote citation:

%0 Thesis
%A Rubies Royo, Vicenc 
%T Assured Autonomy for Safety-Critical and Learning-Enabled Systems
%I EECS Department, University of California, Berkeley
%D 2020
%8 November 19
%@ UCB/EECS-2020-184
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-184.html
%F Rubies Royo:EECS-2020-184