Language Guided Out-of-Distribution Detection
William Gan
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2021-139
May 18, 2021
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-139.pdf
In machine learning, most models are trained under the assumption that their test data will come from the same distribution as their training data. However, in the real world this may not hold, necessitating a method to detect out-of-distribution (OOD) inputs. Thus far, prior works mostly evaluate settings where the OOD inputs belong to different classes, e.g., an image of a dog passed to a cat-breed classifier. They do not consider OOD inputs that are of the same class but exhibit a stylistic change, e.g., a cat under red lighting. In this work, we distinguish these two types as semantic and stylistic OOD data, respectively. We also propose to use a new modality, natural language, for the problem. Since both the in-distribution dataset and stylistic OOD differences can be described with natural language, a model that leverages language can be beneficial. We use OpenAI CLIP to encode style-contextual descriptions of our training dataset and, at test time, compare these to the encoded image. Our method, which we call DesCLIPtions, requires no additional training yet outperforms baselines on certain tasks. Overall, we conclude that natural language supervision is a promising direction for OOD detection.
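The comparison step described above can be sketched as follows. This is a minimal illustration, not the thesis's exact DesCLIPtions scoring rule (which the abstract does not specify): it assumes CLIP image and text embeddings have already been computed, and uses the maximum cosine similarity between an image embedding and the set of description embeddings as an in-distribution score, with a hypothetical threshold for flagging OOD inputs.

```python
import numpy as np

def ood_score(image_emb, text_embs):
    """In-distribution score: max cosine similarity between the image
    embedding and any of the description embeddings (higher = more ID).
    image_emb: (d,) array; text_embs: (n, d) array of description embeddings."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return float(np.max(txt @ img))

def is_ood(image_emb, text_embs, threshold=0.25):
    """Flag an input as OOD when no description matches it well.
    The threshold value here is an illustrative placeholder."""
    return ood_score(image_emb, text_embs) < threshold

# Toy example with stand-in vectors in place of real CLIP embeddings:
cat_desc = np.array([1.0, 0.0, 0.0])       # e.g. "a photo of a cat"
red_cat_desc = np.array([0.8, 0.6, 0.0])   # e.g. "a cat under red lighting"
descriptions = np.stack([cat_desc, red_cat_desc])

cat_image = np.array([0.9, 0.1, 0.0])      # resembles the descriptions
dog_image = np.array([0.0, 0.0, 1.0])      # orthogonal to them
```

With real CLIP embeddings, `descriptions` would be built once from style-contextual sentences about the training set, so no additional training is required; only a forward pass through CLIP's encoders is needed at test time.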
Advisor: Trevor Darrell
BibTeX citation:
@mastersthesis{Gan:EECS-2021-139,
    Author = {Gan, William},
    Title = {Language Guided Out-of-Distribution Detection},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-139.html},
    Number = {UCB/EECS-2021-139},
    Abstract = {In machine learning, most models are trained under the assumption that their test data will come from the same distribution as their training data. However, in the real world this may not hold, necessitating a method to detect out-of-distribution (OOD) inputs. Thus far, prior works mostly evaluate settings where the OOD inputs belong to different classes, e.g., an image of a dog passed to a cat-breed classifier. They do not consider OOD inputs that are of the same class but exhibit a stylistic change, e.g., a cat under red lighting. In this work, we distinguish these two types as semantic and stylistic OOD data, respectively. We also propose to use a new modality, natural language, for the problem. Since both the in-distribution dataset and stylistic OOD differences can be described with natural language, a model that leverages language can be beneficial. We use OpenAI CLIP to encode style-contextual descriptions of our training dataset and, at test time, compare these to the encoded image. Our method, which we call DesCLIPtions, requires no additional training yet outperforms baselines on certain tasks. Overall, we conclude that natural language supervision is a promising direction for OOD detection.},
}
EndNote citation:
%0 Thesis
%A Gan, William
%T Language Guided Out-of-Distribution Detection
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 18
%@ UCB/EECS-2021-139
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-139.html
%F Gan:EECS-2021-139