Building Agentic Systems in an Era of Large Language Models

Charles Packer

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2024-223
December 19, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-223.pdf

Building intelligent autonomous systems that can reason, adapt, and interact with their environment has been a long-standing goal in artificial intelligence. This thesis explores the evolution of agentic systems through the deep learning revolution, from reinforcement learning to modern Large Language Models (LLMs), focusing on the critical components needed to create reliable autonomous agents.

First, we address the fundamental challenge of generalization in deep reinforcement learning (RL), introducing a systematic framework for evaluating and improving how learned policies transfer across environments. Building on this foundation, we present Hindsight Task Relabeling (HTR), a novel approach that enables meta-RL algorithms to learn adaptation strategies in sparse reward settings without requiring dense reward signals during training.

Finally, we address the emerging challenges of building reliable agents using Large Language Models. While LLMs demonstrate unprecedented reasoning capabilities, their effectiveness as autonomous agents is limited by fundamental constraints in their architecture: most notably, their stateless nature and fixed context windows. We present MemGPT, an operating system-inspired framework that enables LLMs to manage their own memory and state, introducing concepts like virtual context management and self-directed memory operations. MemGPT demonstrates that by treating LLMs as a new fundamental unit of compute, analogous to how CPUs were the fundamental unit in traditional operating systems, we can build more reliable and capable autonomous agents.

Together, these systems trace the evolution of agentic AI systems and provide key building blocks for creating more reliable and capable autonomous agents. By addressing core challenges in generalization, adaptation, and memory management, this thesis establishes a foundation for engineering the next generation of AI systems that can effectively reason and interact with the world.
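For readers unfamiliar with hindsight relabeling, the core trick can be sketched in a few lines. The toy Python below is an illustrative analogue of HTR, not the thesis's actual algorithm: a rollout that earned no reward under its sampled task is relabeled so that states the agent actually reached are treated as the task goal, giving the meta-learner reward signal anyway. All names here (Transition, sparse_reward, relabel_in_hindsight) are hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple       # observation before the action
    action: int
    next_state: tuple  # observation after the action

def sparse_reward(state, goal, eps=0.1):
    # Reward is 1 only when the state lands within eps of the goal,
    # which is exactly the setting where naive meta-RL gets no signal.
    return 1.0 if all(abs(s - g) < eps for s, g in zip(state, goal)) else 0.0

def relabel_in_hindsight(trajectory, num_relabels=4):
    # Reuse a reward-free rollout by pretending that states it actually
    # reached were the sampled task's goal all along.
    relabeled = []
    for _ in range(num_relabels):
        goal = random.choice(trajectory).next_state
        rewards = [sparse_reward(t.next_state, goal) for t in trajectory]
        relabeled.append((goal, trajectory, rewards))
    return relabeled

Each relabeled copy contains at least one rewarded transition, so the adaptation procedure has training signal even though the environment's true rewards were sparse.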
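The operating-system analogy behind MemGPT can likewise be sketched. The class below is a minimal, hypothetical illustration of virtual context management: a bounded main context plays the role of RAM, overflow is evicted to an archival store playing the role of disk, and a retrieval method stands in for the self-directed memory operations an agent would invoke via function calling. The names and the message-count budget are assumptions for illustration, not MemGPT's real interface.

from collections import deque

class VirtualContext:
    def __init__(self, window_limit=8):
        self.window_limit = window_limit  # fixed context budget, in messages
        self.main_context = deque()       # in-window messages (the "RAM")
        self.archival = []                # evicted messages (the "disk")

    def append(self, message: str):
        self.main_context.append(message)
        # On overflow, page the oldest messages out to archival storage,
        # the way an OS evicts memory under pressure.
        while len(self.main_context) > self.window_limit:
            self.archival.append(self.main_context.popleft())

    def page_in(self, query: str):
        # A self-directed memory operation: the agent calls this (e.g. via
        # LLM function calling) to pull evicted context back by keyword.
        return [m for m in self.archival if query in m]

In this framing the LLM is the unit of compute and the surrounding framework schedules what the fixed window holds, which is the sense in which the abstract compares LLMs to CPUs in a traditional operating system.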

Advisor: Joseph Gonzalez

\"Edit"; ?>


BibTeX citation:

@phdthesis{Packer:EECS-2024-223,
    Author = {Packer, Charles},
    Title = {Building Agentic Systems in an Era of Large Language Models},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-223.html},
    Number = {UCB/EECS-2024-223},
    Abstract = {Building intelligent autonomous systems that can reason, adapt, and interact with their environment has been a long-standing goal in artificial intelligence. This thesis explores the evolution of agentic systems through the deep learning revolution, from reinforcement learning to modern Large Language Models (LLMs), focusing on the critical components needed to create reliable autonomous agents.
First, we address the fundamental challenge of generalization in deep reinforcement learning (RL), introducing a systematic framework for evaluating and improving how learned policies transfer across environments. Building on this foundation, we present Hindsight Task Relabeling (HTR), a novel approach that enables meta-RL algorithms to learn adaptation strategies in sparse reward settings without requiring dense reward signals during training.
Finally, we address the emerging challenges of building reliable agents using Large Language Models. While LLMs demonstrate unprecedented reasoning capabilities, their effectiveness as autonomous agents is limited by fundamental constraints in their architecture: most notably, their stateless nature and fixed context windows. We present MemGPT, an operating system-inspired framework that enables LLMs to manage their own memory and state, introducing concepts like virtual context management and self-directed memory operations. MemGPT demonstrates that by treating LLMs as a new fundamental unit of compute, analogous to how CPUs were the fundamental unit in traditional operating systems, we can build more reliable and capable autonomous agents.
Together, these systems trace the evolution of agentic AI systems and provide key building blocks for creating more reliable and capable autonomous agents. By addressing core challenges in generalization, adaptation, and memory management, this thesis establishes a foundation for engineering the next generation of AI systems that can effectively reason and interact with the world.}
}

EndNote citation:

%0 Thesis
%A Packer, Charles
%T Building Agentic Systems in an Era of Large Language Models
%I EECS Department, University of California, Berkeley
%D 2024
%8 December 19
%@ UCB/EECS-2024-223
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-223.html
%F Packer:EECS-2024-223