Pre-training Agents for Design Optimization and Control

Kourosh Hakhamaneshi

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2022-35
May 2, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-35.pdf

In recent years, we have seen tremendous benefits from pre-training neural networks to learn representations that transfer to unseen downstream tasks in both vision and NLP. However, this learning paradigm has not been widely explored for decision making, such as design optimization or control. In this thesis, we outline two problem settings that could benefit from pre-training in the context of decision making. First, we describe a setting for automated design optimization, in particular circuit design optimization, where prior domain-specific data can be used to effectively improve the sample efficiency of model-based optimization methods. This thesis presents novel ideas, along with empirical and theoretical analysis, on how to boost the sample efficiency of model-based evolutionary algorithms as well as Bayesian optimization methods. In the second problem setting, we discuss how to leverage unsupervised pre-training on large task-agnostic datasets to extract behavioral representations and perform few-shot imitation learning. We find that pre-training agents to extract skills is a practical way to prepare them for few-shot imitation when example demonstrations from the new task are scarce.

Advisors: Pieter Abbeel and Vladimir Stojanovic


BibTeX citation:

@phdthesis{Hakhamaneshi:EECS-2022-35,
    Author = {Hakhamaneshi, Kourosh},
    Title = {Pre-training Agents for Design Optimization and Control},
    School = {EECS Department, University of California, Berkeley},
    Year = {2022},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-35.html},
    Number = {UCB/EECS-2022-35},
    Abstract = {In recent years, we have seen tremendous benefits from pre-training neural networks to learn representations that transfer to unseen downstream tasks in both vision and NLP. However, this learning paradigm has not been widely explored for decision making, such as design optimization or control. In this thesis, we outline two problem settings that could benefit from pre-training in the context of decision making. First, we describe a setting for automated design optimization, in particular circuit design optimization, where prior domain-specific data can be used to effectively improve the sample efficiency of model-based optimization methods. This thesis presents novel ideas, along with empirical and theoretical analysis, on how to boost the sample efficiency of model-based evolutionary algorithms as well as Bayesian optimization methods. In the second problem setting, we discuss how to leverage unsupervised pre-training on large task-agnostic datasets to extract behavioral representations and perform few-shot imitation learning. We find that pre-training agents to extract skills is a practical way to prepare them for few-shot imitation when example demonstrations from the new task are scarce.}
}

EndNote citation:

%0 Thesis
%A Hakhamaneshi, Kourosh
%T Pre-training Agents for Design Optimization and Control
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 2
%@ UCB/EECS-2022-35
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-35.html
%F Hakhamaneshi:EECS-2022-35