Extraction of Vehicle Trajectories from Online Video Streams

Xinhe Ren, David Wang, Michael Laskey and Ken Goldberg

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2018-44
May 10, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-44.pdf

To collect extensive data on realistic driving behavior for use in simulation, we propose a framework that extracts driving behavior from online public traffic camera video streams. To address challenges such as frame skips, perspective distortion, and low resolution, we implement a Traffic Camera Pipeline (TCP). TCP leverages recent advances in deep learning for object detection and tracking to map trajectories from the video stream to corresponding locations in a bird's-eye-view traffic simulator. After collecting 2618 vehicle trajectories, we compare models learned from the extracted data with those from a simulator and find that a held-out set of trajectories is more likely under the learned models at two levels of traffic behavior: high-level behaviors describing where vehicles enter and exit the intersection, and the specific sequences of points traversed. The learned models can be used to generate and simulate more plausible driving behaviors.
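The mapping from camera frames to the bird's-eye-view simulator described above can be sketched as a planar homography applied to detected vehicle positions. This is a minimal illustrative sketch, not the report's implementation: the function name `to_birds_eye` and the identity homography are assumptions; in practice the homography would be estimated from correspondences between image points and known ground-plane coordinates.

```python
import numpy as np

def to_birds_eye(points_xy, H):
    """Map Nx2 image-plane points through a 3x3 homography H.

    Hypothetical helper: projects detected vehicle positions (e.g.,
    bounding-box centers from a tracker) onto ground-plane coordinates.
    """
    pts = np.asarray(points_xy, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # apply homography
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Example with the identity homography (an assumption for illustration);
# a real H would warp the camera's perspective into the top-down view.
H = np.eye(3)
traj = to_birds_eye([[100.0, 200.0], [110.0, 205.0]], H)
```

A per-frame sequence of such projected points, linked by a tracker across frames, forms one extracted vehicle trajectory.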

Advisor: Ken Goldberg


BibTeX citation:

@mastersthesis{Ren:EECS-2018-44,
    Author = {Ren, Xinhe and Wang, David and Laskey, Michael and Goldberg, Ken},
    Title = {Extraction of Vehicle Trajectories from Online Video Streams},
    School = {EECS Department, University of California, Berkeley},
    Year = {2018},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-44.html},
    Number = {UCB/EECS-2018-44},
    Abstract = {To collect extensive data on realistic driving behavior for use in simulation, we propose a framework that uses online public traffic cam video streams to extract data of driving behavior. To tackle challenges like frame-skip, perspective, and low resolution, we implement a Traffic Camera Pipeline (TCP). TCP leverages recent advances in deep learning for object detection and tracking to extract trajectories from the video stream to corresponding locations in a bird's eye view traffic simulator. After collecting 2618 vehicle trajectories, we compare learned models from the extracted data with those from a simulator and find that a held-out set of trajectories is more likely to occur under the learned models at two levels of traffic behavior: high-level behaviors describing where vehicles enter and exit the intersection, as well as the specific sequences of points traversed. The learned models can be used to generate and simulate more plausible driving behaviors.}
}

EndNote citation:

%0 Thesis
%A Ren, Xinhe
%A Wang, David
%A Laskey, Michael
%A Goldberg, Ken
%T Extraction of Vehicle Trajectories from Online Video Streams
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 10
%@ UCB/EECS-2018-44
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-44.html
%F Ren:EECS-2018-44