David McAllister

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-177

August 9, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-177.pdf

Score distillation sampling (SDS) has proven to be an important tool, enabling the use of large-scale diffusion priors for tasks in data-poor domains. Unfortunately, SDS has a number of characteristic artifacts that limit its usefulness in general-purpose applications. In this report, we make progress toward understanding the behavior of SDS and its variants by viewing them as solving for an optimal-cost transport path from a source distribution to a target distribution. Under this interpretation, these methods seek to transport corrupted images (source) to the natural image distribution (target). We argue that current methods' characteristic artifacts are caused by (1) a linear approximation of the optimal path and (2) poor estimates of the source distribution. We show that calibrating the text conditioning of the source distribution yields high-quality generation and translation results with little extra overhead. Our method can be easily applied across many domains, matching or beating the performance of specialized methods. We demonstrate its utility in text-to-2D generation, text-based NeRF optimization, translating paintings to real images, optical illusion generation, and 3D sketch-to-real. We compare our method against existing approaches to score distillation sampling and show that it produces high-frequency details with realistic colors.
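For readers new to score distillation, the sketch below illustrates the style of update the abstract refers to: classic SDS compares a target-conditioned noise prediction against the sampled noise, while a source-calibrated variant compares it against a prediction conditioned on a source prompt. This is a minimal toy example under assumed names and a toy noise schedule (eps_model, score_distillation_grad, y_target, y_source are placeholders), not the thesis's implementation.

    import math
    import torch

    def score_distillation_grad(eps_model, x, t, y_target, y_source=None):
        # eps_model(x_t, t, y) -> predicted noise from a pretrained diffusion prior.
        # Classic SDS subtracts the sampled noise eps from the target-conditioned
        # prediction; the source-calibrated variant instead subtracts a prediction
        # conditioned on a calibrated source prompt, approximating a transport
        # path between the source and target distributions.
        eps = torch.randn_like(x)
        alpha_bar = torch.cos(t.float() / 1000 * math.pi / 2) ** 2  # toy schedule
        ab = alpha_bar.view(-1, 1, 1, 1)
        x_t = ab.sqrt() * x + (1.0 - ab).sqrt() * eps  # forward-diffused image
        with torch.no_grad():
            pred_target = eps_model(x_t, t, y_target)
            pred_source = eps if y_source is None else eps_model(x_t, t, y_source)
        w = 1.0 - ab  # one common weighting choice
        # Treated directly as the gradient on x; the prior is not backpropagated.
        return w * (pred_target - pred_source)

In an optimization loop, this tensor would be assigned (suitably scaled) as the gradient of the image, or of the parameters that render it (e.g., a NeRF), before an optimizer step.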

Advisors: Angjoo Kanazawa and Alexei (Alyosha) Efros


BibTeX citation:

@mastersthesis{McAllister:EECS-2024-177,
    author = {McAllister, David},
    editor = {Kanazawa, Angjoo and Efros, Alexei (Alyosha)},
    title = {Extending Data Priors Across Domains with Diffusion Distillation},
    school = {EECS Department, University of California, Berkeley},
    year = {2024},
    month = {Aug},
    url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-177.html},
    number = {UCB/EECS-2024-177},
    abstract = {Score distillation sampling (SDS) has proven to be an important tool, enabling the use of large-scale diffusion priors for tasks in data-poor domains. Unfortunately, SDS has a number of characteristic artifacts that limit its usefulness in general-purpose applications. In this report, we make progress toward understanding the behavior of SDS and its variants by viewing them as solving for an optimal-cost transport path from a source distribution to a target distribution. Under this interpretation, these methods seek to transport corrupted images (source) to the natural image distribution (target). We argue that current methods' characteristic artifacts are caused by (1) a linear approximation of the optimal path and (2) poor estimates of the source distribution. We show that calibrating the text conditioning of the source distribution yields high-quality generation and translation results with little extra overhead. Our method can be easily applied across many domains, matching or beating the performance of specialized methods. We demonstrate its utility in text-to-2D generation, text-based NeRF optimization, translating paintings to real images, optical illusion generation, and 3D sketch-to-real. We compare our method against existing approaches to score distillation sampling and show that it produces high-frequency details with realistic colors.},
}

EndNote citation:

%0 Thesis
%A McAllister, David 
%E Kanazawa, Angjoo 
%E Efros, Alexei (Alyosha) 
%T Extending Data Priors Across Domains with Diffusion Distillation
%I EECS Department, University of California, Berkeley
%D 2024
%8 August 9
%@ UCB/EECS-2024-177
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-177.html
%F McAllister:EECS-2024-177