Deep Reinforcement Learning for Autonomous Vehicles: Improving Traffic Flow in Mixed-Autonomy

Nathan Lichtle

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-66

May 9, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-66.pdf

In this work, we optimize fuel consumption in a large, calibrated traffic model of a portion of the Ventura Freeway (Interstate 210, near Los Angeles, California) by leveraging a low proportion of autonomous vehicles controlled by reinforcement learning algorithms. We specifically target stop-and-go waves, a phenomenon characterized by alternating acceleration and braking, which is widespread on real-world highways and significantly detrimental to fuel efficiency. In order to simulate these dynamics accurately, we introduce waves into the network using a string-unstable car-following model, as well as a ghost cell to enable wave propagation beyond the network boundary. Using multi-agent reinforcement learning, we develop a decentralized controller that effectively mitigates instabilities and partially dampens these waves, resulting in a significant 25% reduction in fuel consumption with only a 10% penetration rate of autonomous vehicles. We then investigate the designed controller's robustness by testing it under various conditions. Our results show that it maintains equilibrium speeds across a wide range of wave speeds and penetration rates far outside the training regime, demonstrating its generalization and robustness.
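To illustrate the string instability the abstract refers to, the sketch below simulates the Bando optimal-velocity model, a classic string-unstable car-following model, on a ring road. This is a minimal illustrative example only: the report's actual car-following model, ghost-cell setup, and parameters are not given here, and all values below (sensitivity `alpha`, ring length, perturbation size) are assumptions chosen to sit in the model's known unstable regime. A small perturbation to one vehicle grows into stop-and-go waves when the driver sensitivity is low, and decays when it is high.

```python
import numpy as np

def ovm_ring(n=22, length=44.0, alpha=0.5, dt=0.1, steps=5000):
    """Bando optimal-velocity model on a ring road, integrated with forward Euler.

    Each car accelerates toward an 'optimal velocity' that depends on its
    headway (gap) to the leader. Linear analysis predicts string instability
    roughly when alpha < 2 * V'(s_eq); with the tanh velocity function below
    and equilibrium gap 2.0, that threshold is alpha = 2.
    """
    x = np.arange(n) * (length / n)                      # equally spaced cars
    x[0] += 0.1                                          # small perturbation to seed waves
    s_eq = length / n
    v = np.full(n, np.tanh(s_eq - 2.0) + np.tanh(2.0))   # start at equilibrium speed
    for _ in range(steps):
        gap = np.roll(x, -1) - x                         # headway to the car ahead
        gap[-1] += length                                # last car follows the first, one lap ahead
        v_des = np.tanh(gap - 2.0) + np.tanh(2.0)        # optimal-velocity function
        v = np.maximum(v + alpha * (v_des - v) * dt, 0.0)  # no driving backwards
        x = x + v * dt                                   # positions kept unwrapped
    return v
```

With `alpha=0.5` (unstable) the final speeds spread out between near-stopped and free-flowing, i.e. a stop-and-go wave; with `alpha=3.0` (stable) the perturbation dies out and all speeds return to equilibrium.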

Advisor: Alexandre Bayen


BibTeX citation:

@mastersthesis{Lichtle:EECS-2024-66,
    Author= {Lichtle, Nathan},
    Title= {Deep Reinforcement Learning for Autonomous Vehicles: Improving Traffic Flow in Mixed-Autonomy},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-66.html},
    Number= {UCB/EECS-2024-66},
    Abstract= {In this work, we optimize fuel consumption in a large, calibrated traffic model of a portion of the Ventura Freeway (Interstate 210, near Los Angeles, California) by leveraging a low proportion of autonomous vehicles controlled by reinforcement learning algorithms. We specifically target stop-and-go waves, a phenomenon characterized by alternating acceleration and braking, which is widespread on real-world highways and significantly detrimental to fuel efficiency. In order to simulate these dynamics accurately, we introduce waves into the network using a string-unstable car-following model, as well as a ghost cell to enable wave propagation beyond the network boundary. Using multi-agent reinforcement learning, we develop a decentralized controller that effectively mitigates instabilities and partially dampens these waves, resulting in a significant 25% reduction in fuel consumption with only a 10% penetration rate of autonomous vehicles. We then investigate the designed controller's robustness by testing it under various conditions. Our results show that it maintains equilibrium speeds across a wide range of wave speeds and penetration rates far outside the training regime, demonstrating its generalization and robustness.},
}

EndNote citation:

%0 Thesis
%A Lichtle, Nathan 
%T Deep Reinforcement Learning for Autonomous Vehicles: Improving Traffic Flow in Mixed-Autonomy
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 9
%@ UCB/EECS-2024-66
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-66.html
%F Lichtle:EECS-2024-66