Ren Wang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2025-167

August 15, 2025

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-167.pdf

Prior work has established Test-Time Training (TTT) as a general framework for further improving a trained model at test time. Before making a prediction on each test instance, the model is first trained on that same instance using a self-supervised task such as reconstruction. We extend TTT to the streaming setting, where multiple test instances (video frames, in our case) arrive in temporal order. Our extension is online TTT: the current model is initialized from the previous model, then trained on the current frame and a small window of frames immediately before it. Online TTT significantly outperforms the fixed-model baseline on four tasks across three real-world datasets, with improvements of more than 2.2x and 1.5x for instance and panoptic segmentation, respectively. Surprisingly, online TTT also outperforms its offline variant, which accesses strictly more information by training on all frames from the entire test video regardless of temporal order. This finding challenges the conclusions of prior work that used synthetic videos. We formalize a notion of locality as the advantage of online over offline TTT, and analyze its role through ablations and a theory based on the bias-variance trade-off.
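To make the procedure concrete, below is a minimal sketch in PyTorch of the kind of online TTT loop the abstract describes. The toy model, self-supervised reconstruction head, window size, learning rate, and number of gradient steps are illustrative assumptions for this sketch, not the configuration used in the report.

# Minimal sketch of an online TTT loop, assuming a toy model with a
# reconstruction (self-supervised) head; hyperparameters are illustrative.
import collections
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Toy stand-in: a shared encoder with a reconstruction head
    (self-supervised task) and a prediction head (main task)."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.ReLU())
        self.decoder = nn.Linear(dim, 3 * 32 * 32)   # reconstruction head
        self.head = nn.Linear(dim, 10)               # main-task head

    def reconstruct(self, x):
        return self.decoder(self.encoder(x))

    def predict(self, x):
        return self.head(self.encoder(x))

def online_ttt(model, frames, window=4, steps=1, lr=1e-3):
    """For each incoming frame: continue from the previous model's weights
    (no reset), take a few self-supervised gradient steps on a small window
    of recent frames, then predict on the current frame."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    buf = collections.deque(maxlen=window)            # sliding window of frames
    outputs = []
    for frame in frames:                              # frames arrive in temporal order
        buf.append(frame)
        batch = torch.stack(list(buf))
        for _ in range(steps):                        # reconstruction updates before predicting
            loss = nn.functional.mse_loss(model.reconstruct(batch), batch.flatten(1))
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            outputs.append(model.predict(frame.unsqueeze(0)))
    return outputs

# Usage on a dummy stream of 8 RGB frames:
stream = [torch.rand(3, 32, 32) for _ in range(8)]
preds = online_ttt(TinyModel(), stream)

Because the weights and optimizer persist across frames rather than being reset per instance, each frame's update continues from the previous model; that carry-over is what distinguishes online TTT from the per-instance and offline variants discussed above.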

Advisors: Alexei (Alyosha) Efros and Jitendra Malik

BibTeX citation:

@mastersthesis{Wang:EECS-2025-167,
    Author= {Wang, Ren},
    Editor= {Efros, Alexei (Alyosha) and Malik, Jitendra},
    Title= {Test-Time Training on Video Streams},
    School= {EECS Department, University of California, Berkeley},
    Year= {2025},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-167.html},
    Number= {UCB/EECS-2025-167},
    Abstract= {Prior work has established Test-Time Training (TTT) as a general framework for further improving a trained model at test time. Before making a prediction on each test instance, the model is first trained on that same instance using a self-supervised task such as reconstruction. We extend TTT to the streaming setting, where multiple test instances (video frames, in our case) arrive in temporal order. Our extension is online TTT: the current model is initialized from the previous model, then trained on the current frame and a small window of frames immediately before it. Online TTT significantly outperforms the fixed-model baseline on four tasks across three real-world datasets, with improvements of more than 2.2x and 1.5x for instance and panoptic segmentation, respectively. Surprisingly, online TTT also outperforms its offline variant, which accesses strictly more information by training on all frames from the entire test video regardless of temporal order. This finding challenges the conclusions of prior work that used synthetic videos. We formalize a notion of locality as the advantage of online over offline TTT, and analyze its role through ablations and a theory based on the bias-variance trade-off.},
}

EndNote citation:

%0 Thesis
%A Wang, Ren 
%E Efros, Alexei (Alyosha) 
%E Malik, Jitendra 
%T Test-Time Training on Video Streams
%I EECS Department, University of California, Berkeley
%D 2025
%8 August 15
%@ UCB/EECS-2025-167
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-167.html
%F Wang:EECS-2025-167