Sam Toyer

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-123

May 17, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-123.pdf

This dissertation considers how to evaluate and improve the robustness of AI systems in situations that are systematically different from those encountered during training. Specifically, we consider test-time robustness for two ways of specifying tasks and two corresponding forms of generalization. The first part of this dissertation focuses on learning tasks from demonstrations via imitation, while the second focuses on specifying tasks for large language models using natural language instructions.

In the first part, we consider the combinatorial and out-of-distribution generalization of imitation learning. Our first contribution is a benchmark that measures how well learned policies generalize along various axes of variation. The benchmark allows us to manipulate these axes independently to determine which invariances and equivariances a policy has. Using this benchmark, we show that some basic computer vision techniques (data augmentation, egocentric views) improve imitative generalization, but more sophisticated representation learning techniques do not.
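
As a rough illustration of how such an evaluation works (a sketch under assumed helper functions and axis names, not the benchmark's actual code), one can score a policy on its training conditions and on test variants that each change a single axis; a large drop on one axis indicates the policy is not invariant to that kind of variation:

# Illustrative sketch only: assumes hypothetical helpers make_env(variant),
# which builds a test environment differing from the demonstration setting
# along one named axis, and rollout_score(policy, env, episodes), which
# returns average task success over several rollouts.

AXES = ["demo", "colour", "shape", "layout", "dynamics"]  # "demo" = training conditions

def generalization_profile(policy, make_env, rollout_score, episodes=50):
    """Score a policy on the demo variant and on each single-axis test variant."""
    scores = {}
    for axis in AXES:
        env = make_env(variant=axis)
        scores[axis] = rollout_score(policy, env, episodes=episodes)
    return scores

# Example usage (with a user-supplied policy and helpers):
# profile = generalization_profile(policy, make_env, rollout_score)
# gaps = {axis: profile["demo"] - s for axis, s in profile.items() if axis != "demo"}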

In the second part, we consider the adversarial robustness of instruction-following language models, where a user actively tries to provoke errors from the model. Here we contribute a large dataset of prompt injection attacks collected from an online game, which we distill into a benchmark for language model robustness. We also consider a second type of adversarial attack, the jailbreak, and show that existing evaluations are insufficient to gauge the actual misuse potential of jailbreaking techniques. We therefore propose a new benchmark that identifies effective jailbreaks while correctly disregarding ineffective ones.
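
The prompt injection evaluation can likewise be pictured as a simple loop (again a hedged sketch rather than the actual harness; query_model and the success criterion are placeholders): pair each defended system prompt with each attack string, and count a pair as broken if the attacker's target phrase appears in the model's output:

# Illustrative sketch only: query_model(system_prompt, user_input) -> str is
# assumed to wrap whatever chat model is being evaluated; success_marker is
# a placeholder for the attacker's goal phrase.

def injection_robustness(query_model, defenses, attacks, success_marker="access granted"):
    """Return the fraction of (defense, attack) pairs the model resists."""
    total = 0
    broken = 0
    for defense in defenses:
        for attack in attacks:
            output = query_model(system_prompt=defense, user_input=attack)
            total += 1
            if success_marker in output.lower():
                broken += 1
    return (1.0 - broken / total) if total else 1.0

# Example usage:
# robustness = injection_robustness(query_model, defenses, attacks)
# print(f"Model resisted {robustness:.1%} of injection attempts")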

This dissertation proposes several evaluations for challenging problems on which existing algorithms fail: imitation learning algorithms struggle to generalize when only a few demonstrations are available, and representation learning is not an easy fix. Likewise, the safeguards around large language models are easy for an adversary to subvert. These negative results point toward ways that AI systems could be made more robust in unexpected circumstances; we describe these opportunities for future work in Chapter 6.

Advisor: Stuart J. Russell


BibTeX citation:

@phdthesis{Toyer:EECS-2024-123,
    Author= {Toyer, Sam},
    Title= {Robust Task Specification for Learning Systems},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-123.html},
    Number= {UCB/EECS-2024-123},
    Abstract= {This dissertation considers how to evaluate and improve the robustness of AI systems in situations that are systematically different from those encountered during training. Specifically, we consider test-time robustness for two ways of specifying tasks and two corresponding forms of generalization. The first part of this dissertation focuses on learning tasks from demonstrations via imitation, while the second focuses on specifying tasks for large language models using natural language instructions.

In the first part, we consider the combinatorial and out-of-distribution generalization of imitation learning. Our first contribution is a benchmark that measures how well learned policies generalize along various axes of variation. The benchmark allows us to manipulate these axes independently to determine which invariances and equivariances a policy has. Using this benchmark, we show that some basic computer vision techniques (data augmentation, egocentric views) improve imitative generalization, but more sophisticated representation learning techniques do not.

In the second part, we consider the adversarial robustness of instruction-following language models, where a user actively tries to provoke errors from the model. Here we contribute a large dataset of prompt injection attacks collected from an online game, which we distill into a benchmark for language model robustness. We also consider a second type of adversarial attack, the jailbreak, and show that existing evaluations are insufficient to gauge the actual misuse potential of jailbreaking techniques. We therefore propose a new benchmark that identifies effective jailbreaks while correctly disregarding ineffective ones.

This dissertation proposes several evaluations for challenging problems on which existing algorithms fail: imitation learning algorithms struggle to generalize when only a few demonstrations are available, and representation learning is not an easy fix. Likewise, the safeguards around large language models are easy for an adversary to subvert. These negative results point toward ways that AI systems could be made more robust in unexpected circumstances; we describe these opportunities for future work in Chapter 6.},
}

EndNote citation:

%0 Thesis
%A Toyer, Sam 
%T Robust Task Specification for Learning Systems
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 17
%@ UCB/EECS-2024-123
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-123.html
%F Toyer:EECS-2024-123