Nathan Lichtle

EECS Department, University of California, Berkeley

Technical Report No. UCB/

May 1, 2025

Highway traffic instabilities, commonly manifested as stop-and-go waves or capacity drop at bottlenecks, represent a major source of energy waste and congestion in transportation systems. Even small fractions of autonomous vehicles (AVs) have the potential to dampen these phenomena and improve overall traffic flow. However, developing control methods that can transition from simulation to deployment in mixed-autonomy environments remains a fundamental challenge. The core problem lies in creating robust control algorithms that can handle the scale and complexity of multi-agent traffic interactions while maintaining safety and performance when deployed on actual roads with human drivers.

This work establishes a complete research pipeline from simulation-based algorithm development to large-scale field validation of deep reinforcement learning (RL) methods for AV traffic control. The approach begins with developing multi-agent RL techniques for decentralized bottleneck control. It introduces Nocturne, a high-throughput driving simulator built on trajectory data to enable scalable multi-agent learning. Wave-smoothing cruise controllers are then trained directly on highway trajectory data, leading to a validated deployment pipeline that transfers learned policies to production vehicles without retuning. The methodology culminates in a large-scale field experiment where RL controllers developed in this work operate 100 connected and automated vehicles deployed in live rush hour traffic on the I-24 highway in Tennessee, achieving measurable traffic smoothing through distributed control. The work concludes by introducing neural network-based methods for traffic flow prediction via partial differential equations. Together, these contributions demonstrate both the feasibility and effectiveness of learning-based traffic control in mixed-autonomy environments, offering a clear path toward substantial improvements in highway energy efficiency and traffic congestion reduction.

Advisor: Alexandre Bayen


BibTeX citation:

@phdthesis{Lichtle:31905,
    Author= {Lichtle, Nathan},
    Title= {Deep Reinforcement Learning for Autonomous Vehicle Traffic Control and Stabilization: From Simulation to a 100-Vehicle Highway Field Deployment},
    School= {EECS Department, University of California, Berkeley},
    Year= {2025},
    Number= {UCB/},
    Abstract= {Highway traffic instabilities, commonly manifested as stop-and-go waves or capacity drop at bottlenecks, represent a major source of energy waste and congestion in transportation systems. Even small fractions of autonomous vehicles (AVs) have the potential to dampen these phenomena and improve overall traffic flow. However, developing control methods that can transition from simulation to deployment in mixed-autonomy environments remains a fundamental challenge. The core problem lies in creating robust control algorithms that can handle the scale and complexity of multi-agent traffic interactions while maintaining safety and performance when deployed on actual roads with human drivers.

This work establishes a complete research pipeline from simulation-based algorithm development to large-scale field validation of deep reinforcement learning (RL) methods for AV traffic control. The approach begins with developing multi-agent RL techniques for decentralized bottleneck control. It introduces Nocturne, a high-throughput driving simulator built on trajectory data to enable scalable multi-agent learning. Wave-smoothing cruise controllers are then trained directly on highway trajectory data, leading to a validated deployment pipeline that transfers learned policies to production vehicles without retuning. The methodology culminates in a large-scale field experiment where RL controllers developed in this work operate 100 connected and automated vehicles deployed in live rush hour traffic on the I-24 highway in Tennessee, achieving measurable traffic smoothing through distributed control. The work concludes by introducing neural network-based methods for traffic flow prediction via partial differential equations. Together, these contributions demonstrate both the feasibility and effectiveness of learning-based traffic control in mixed-autonomy environments, offering a clear path toward substantial improvements in highway energy efficiency and traffic congestion reduction.},
}

EndNote citation:

%0 Thesis
%A Lichtle, Nathan 
%T Deep Reinforcement Learning for Autonomous Vehicle Traffic Control and Stabilization: From Simulation to a 100-Vehicle Highway Field Deployment
%I EECS Department, University of California, Berkeley
%D 2025
%8 May 1
%@ UCB/
%F Lichtle:31905