Austin Jang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-101

May 11, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-101.pdf

Reinforcement learning (RL) is a powerful tool for optimal control that has found great success in Atari games, the game of Go, robotic control, and building optimization. However, RL is also brittle: agents often overfit to their training environment and fail to generalize to new settings. Unsupervised environment design (UED) has been proposed as a solution to this problem, in which the agent trains in environments that have been specially selected to help it learn. However, previous UED algorithms focus on training an RL agent that generalizes across a large distribution of environments. This is undesirable when we wish to prioritize performance in one environment over others. For example, we examine robust RL building control, where we wish to train an agent that prioritizes performing well in normal weather while remaining robust to extreme weather conditions. We present a novel UED algorithm, ActiveRL, that uses uncertainty-aware neural network architectures to generate new training environments at the edge of the RL agent's ability while prioritizing performance in a desired base environment. We show that ActiveRL outperforms state-of-the-art UED algorithms at minimizing energy usage while maximizing occupant comfort in the building control setting.
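To make the core idea concrete, below is a minimal, hypothetical sketch of an ActiveRL-style environment-generation step. It is not the report's implementation: the function names, the ensemble-disagreement uncertainty proxy, and the trust-region radius around the base environment are all assumptions illustrating "generate environments at the edge of the agent's ability while staying anchored to a base environment."

    # Hypothetical sketch of an ActiveRL-style environment proposal step.
    # All names and the uncertainty heuristic are assumptions, not the
    # report's actual method.
    import numpy as np

    def ensemble_uncertainty(value_ensemble, env_params):
        """Disagreement among an ensemble of value estimates for a candidate
        environment -- one common proxy for the edge of the agent's ability."""
        preds = np.array([v(env_params) for v in value_ensemble])
        return preds.std()

    def propose_environment(base_params, value_ensemble,
                            n_candidates=64, radius=0.1, rng=None):
        """Sample candidate environment parameters near the base environment
        and return the one the agent is most uncertain about. Constraining
        candidates to lie within `radius` of base_params is how this sketch
        prioritizes performance in the desired base environment."""
        rng = rng or np.random.default_rng()
        noise = rng.standard_normal((n_candidates, base_params.size))
        candidates = base_params + radius * noise
        scores = [ensemble_uncertainty(value_ensemble, c) for c in candidates]
        return candidates[int(np.argmax(scores))]

    # Usage: each outer iteration, train the RL agent on the proposed
    # environment's parameters (e.g., weather settings), then refresh the
    # value ensemble from the new rollouts before proposing again.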

Advisor: Costas J. Spanos


BibTeX citation:

@mastersthesis{Jang:EECS-2023-101,
    Author= {Jang, Austin},
    Title= {Active Reinforcement Learning for Robust Building Control},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-101.html},
    Number= {UCB/EECS-2023-101},
    Abstract= {Reinforcement learning (RL) is a powerful tool for optimal control that has found great success in Atari games, the game of Go, robotic control, and building optimization. However, RL is also brittle: agents often overfit to their training environment and fail to generalize to new settings. Unsupervised environment design (UED) has been proposed as a solution to this problem, in which the agent trains in environments that have been specially selected to help it learn. However, previous UED algorithms focus on training an RL agent that generalizes across a large distribution of environments. This is undesirable when we wish to prioritize performance in one environment over others. For example, we examine robust RL building control, where we wish to train an agent that prioritizes performing well in normal weather while remaining robust to extreme weather conditions. We present a novel UED algorithm, ActiveRL, that uses uncertainty-aware neural network architectures to generate new training environments at the edge of the RL agent's ability while prioritizing performance in a desired base environment. We show that ActiveRL outperforms state-of-the-art UED algorithms at minimizing energy usage while maximizing occupant comfort in the building control setting.},
}

EndNote citation:

%0 Thesis
%A Jang, Austin 
%T Active Reinforcement Learning for Robust Building Control
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 11
%@ UCB/EECS-2023-101
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-101.html
%F Jang:EECS-2023-101