Perception Stack for Indy Autonomous Challenge and Reinforcement Learning in Simulation Autonomous Racing

Tianlun Zhang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-187

May 22, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-187.pdf

This report presents an advanced perception stack for the Indy Autonomous Challenge (IAC) and a reinforcement learning approach to autonomous racing in simulation. The first part of the report describes an efficient perception system that fuses inputs from multiple sensors, namely cameras, RADAR, and LiDAR, with the objective of robustly detecting and tracking other race cars during a race. We have implemented a pure-RADAR detection pipeline for long-range detection and a LiDAR-camera detection pipeline for short-range detection. Both pipelines run correctly and efficiently enough to meet our race cars' perception requirements, and merging the detections from the two pipelines and feeding them into the tracker yields a stable, reliable perception output for the race. We won second place in the Texas Motor Speedway Race and third place in the Las Vegas Motor Speedway Race.

The second part of the report describes the implementation and improvement of a Reinforcement Learning (RL) agent for autonomous racing in simulation. Building on the RL agent we designed and implemented last year, we further challenge the agent to drive more stably and safely in the simulation environment. We redesigned the environment's observation and action spaces to reduce model complexity while giving the agent as much useful information as possible, helping it understand the map better and converge faster. Inspired by Agent57's breakthroughs on Atari games, we adapted and enhanced our network architecture and replaced the optimization algorithm with Soft Actor-Critic (SAC), expecting the more robust network to make better use of the extracted features and produce a safe RL driving policy. Training is still underway, but current results show that the new agent learns to speed up on straights and slow down for turns. Further evaluation is needed to assess its performance, and we will continue training and optimizing the RL agent in our pursuit of excellence in simulated autonomous racing.
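As a rough illustration of the fusion described above (not the report's actual code), the sketch below merges hypothetical long-range RADAR detections with short-range LiDAR-camera detections and feeds the result to a toy nearest-neighbor tracker. The Detection class, the 60 m handoff range, and the association gate are all illustrative assumptions.

    # Hypothetical sketch of the described fusion flow: long-range RADAR
    # detections and short-range LiDAR-camera detections are merged, then
    # passed to a tracker. All names and thresholds are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        x: float      # longitudinal position in the ego frame (m)
        y: float      # lateral position in the ego frame (m)
        source: str   # "radar" or "lidar_camera"

    def merge_detections(radar_dets, lidar_cam_dets, handoff_range_m=60.0):
        """Prefer LiDAR-camera detections up close, RADAR beyond the handoff range."""
        merged = [d for d in lidar_cam_dets if d.x <= handoff_range_m]
        merged += [d for d in radar_dets if d.x > handoff_range_m]
        return merged

    class NearestNeighborTracker:
        """Toy stand-in for the report's tracker: each detection updates the
        closest existing track within a gate, or starts a new track."""
        def __init__(self, gate_m=5.0):
            self.tracks = []   # last-seen (x, y) position per track
            self.gate_m = gate_m

        def update(self, detections):
            for det in detections:
                dists = [abs(det.x - tx) + abs(det.y - ty) for tx, ty in self.tracks]
                if dists and min(dists) < self.gate_m:
                    self.tracks[dists.index(min(dists))] = (det.x, det.y)
                else:
                    self.tracks.append((det.x, det.y))
            return self.tracks

    if __name__ == "__main__":
        radar = [Detection(120.0, -1.5, "radar")]          # far opponent
        lidar_cam = [Detection(25.0, 0.8, "lidar_camera")] # near opponent
        tracker = NearestNeighborTracker()
        print(tracker.update(merge_detections(radar, lidar_cam)))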
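Similarly, the following sketch shows what the described RL setup might look like, assuming a Gymnasium-style environment and the Soft Actor-Critic implementation from stable-baselines3. The report does not name a library, and ToyRacingEnv's compact observation/action spaces and reward are invented stand-ins for the authors' redesigned ones.

    # Illustrative only: a minimal continuous-control environment trained
    # with stable-baselines3's SAC, standing in for the racing simulator.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import SAC

    class ToyRacingEnv(gym.Env):
        """Hypothetical simulator stand-in. The observation packs ego speed
        plus a few look-ahead curvature samples (a compact space in the
        spirit of the redesign); the action is [throttle, steering]."""
        def __init__(self):
            super().__init__()
            self.observation_space = spaces.Box(-1.0, 1.0, shape=(5,), dtype=np.float32)
            self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self._t = 0
            return self.observation_space.sample(), {}

        def step(self, action):
            self._t += 1
            obs = self.observation_space.sample()
            # Toy reward: favor throttle (action[0]) when look-ahead curvature
            # (obs[1]) is small, i.e., speed up on straights, slow for turns.
            reward = float(action[0] * (1.0 - abs(obs[1])))
            return obs, reward, False, self._t >= 200, {}

    # SAC learns a stochastic policy with an entropy bonus, encouraging the
    # exploration the report leans on for more stable, safer driving.
    model = SAC("MlpPolicy", ToyRacingEnv(), learning_rate=3e-4, verbose=0)
    model.learn(total_timesteps=1_000)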


BibTeX citation:

@mastersthesis{Zhang:EECS-2023-187,
    Author= {Zhang, Tianlun},
    Title= {Perception Stack for Indy Autonomous Challenge and Reinforcement Learning in Simulation Autonomous Racing},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-187.html},
    Number= {UCB/EECS-2023-187},
    Abstract= {This report presents an exploration of advanced perception stack for Indy Autonomous Racing (IAC) and reinforcement learning for autonomous racing in the simulation. The first part of the report investigates an efficient perception system that uses the inputs from a variety of sensors, namely cameras, RADAR, LiDAR. The objective of this system is to robustly detect and track other race cars during the race. We have successfully implemented a pure RADAR detection pipeline for long-range detection and a LiDAR-Camera detection pipeline for short-range detection. Both pipelines work properly and efficiently to serve the requirements of perception for our race cars. Merging the detections from both and feeding it into the tracker gives us a stable and promising perception output for the race. We won second place in the Texas Motor Speedway Race and third place in the Las Vegas Motor Speedway Race. The second part of the paper investigates the implementation and improvement of a Reinforcement Learning (RL) agent for autonomous racing in the simulation. Continuing from the successful RL agent we designed and implemented last year, we furthermore challenge the RL agent to achieve more stable and safer driving in the simulation environment. We redesigned the observation space and action space for the RL environment to reduce the model complexity and provide as much useful information to our agent as possible, hoping to help the agent understand the map better and make the model converge faster. Inspired by the breakthroughs in the Atari game by Agent57, we have adapted and enhanced our network structure and replaced the optimization policy to Soft Actor Critic (SAC), hoping the more robust network can better deal with the extracted features and result in a safe RL driving solution. The training is still underway, but the current results demonstrate that the new RL agent has the ability to learn to speed up on the straight and slow down upon turning. Future results are needed to evaluate the performance of our new agent, and we will continue our efforts in further training and optimization of the RL agent in our pursuit of excellence in simulation autonomous racing.},
}

EndNote citation:

%0 Thesis
%A Zhang, Tianlun 
%T Perception Stack for Indy Autonomous Challenge and Reinforcement Learning in Simulation Autonomous Racing
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 22
%@ UCB/EECS-2023-187
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-187.html
%F Zhang:EECS-2023-187