Micro-Domain Adaptation on Long-Running Videos

Victor Sun

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2020-91
May 29, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-91.pdf

Domain adaptation techniques are often used to fine-tune models and improve performance when the distribution of the test data differs from that of the training data. A variety of domain adaptation techniques and datasets for object detection exist for images and short video sequences, but the problem is far less studied for long-running videos of an hour or more. In longer videos there can be a significant domain gap between the beginning and the end of the video, and no existing datasets allow us to evaluate this. We aim to provide a diverse test dataset of long-running videos with noticeable domain shifts within each video, for studying micro-domain adaptation over a long sequence. The videos are taken from a variety of YouTube live-cam streams in cities around the world and contain labeled frames. We also discuss potential self-supervised and semi-supervised online learning approaches for handling concept drift in object detection on long-running videos.
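The self-supervised online learning the abstract alludes to is commonly realized as confidence-thresholded self-training: as the stream drifts, the model takes small updates on its own high-confidence predictions. The report's actual method is not reproduced here; the following is a toy sketch of that general pattern on a drifting 1-D classification stream (a stand-in for a detector watching a long-running video), with all names and parameters chosen for illustration:

```python
import math
import random

random.seed(0)

# Toy stand-in for an object detector: a 1-D logistic classifier whose
# decision boundary must track a slowly drifting input distribution
# (analogous to lighting changing over a long-running live-cam video).
w, b = 1.0, 0.0        # model parameters
lr = 0.05              # online learning rate
conf_thresh = 0.9      # only self-train on confident pseudo-labels

def predict_proba(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

accepted = 0
for t in range(2000):
    drift = t / 1000.0                       # distribution shifts over "time"
    label = random.randint(0, 1)             # true class (unseen by the model)
    x = random.gauss((2 * label - 1) + drift, 0.5)

    p = predict_proba(x)
    conf = max(p, 1.0 - p)
    if conf >= conf_thresh:
        # Self-supervised update: treat the confident prediction as a
        # pseudo-label and take one SGD step on the log loss.
        pseudo = 1.0 if p >= 0.5 else 0.0
        grad = p - pseudo                    # d(logloss)/d(logit)
        w -= lr * grad * x
        b -= lr * grad
        accepted += 1
```

The confidence threshold is the key knob: too low and the model reinforces its own mistakes as the domain shifts; too high and it never adapts. Semi-supervised variants mix in the occasional labeled frame alongside these pseudo-labeled updates.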

Advisor: Joseph Gonzalez


BibTeX citation:

@mastersthesis{Sun:EECS-2020-91,
    Author = {Sun, Victor},
    Title = {Micro-Domain Adaptation on Long-Running Videos},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-91.html},
    Number = {UCB/EECS-2020-91},
    Abstract = {Domain adaptation techniques are often used to fine-tune models and improve performance when the distribution of the test data differs from that of the training data. A variety of domain adaptation techniques and datasets for object detection exist for images and short video sequences, but the problem is far less studied for long-running videos of an hour or more. In longer videos there can be a significant domain gap between the beginning and the end of the video, and no existing datasets allow us to evaluate this. We aim to provide a diverse test dataset of long-running videos with noticeable domain shifts within each video, for studying micro-domain adaptation over a long sequence. The videos are taken from a variety of YouTube live-cam streams in cities around the world and contain labeled frames. We also discuss potential self-supervised and semi-supervised online learning approaches for handling concept drift in object detection on long-running videos.}
}

EndNote citation:

%0 Thesis
%A Sun, Victor
%T Micro-Domain Adaptation on Long-Running Videos
%I EECS Department, University of California, Berkeley
%D 2020
%8 May 29
%@ UCB/EECS-2020-91
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-91.html
%F Sun:EECS-2020-91