Lisa Hendricks

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-56

May 17, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-56.pdf

Powered by deep convolutional networks and large-scale visual datasets, modern computer vision systems can accurately recognize thousands of visual categories. However, images contain much more than categorical labels: they contain information about where objects are located (in a forest or in a kitchen?), what attributes an object has (red or blue?), and how objects interact with other objects in a scene (is the child sitting on a sofa, or running in a field?). Natural language provides an efficient and intuitive way for visual systems to convey important information about a visual scene.

We begin by considering a fundamental task at the intersection of language and vision: image captioning, in which a system receives an image as input and outputs a natural language sentence that describes the image. We consider two important shortcomings of modern image captioning models. First, in order to describe an object, like “otter”, captioning models require pairs of sentences and images that include the object “otter”. In Chapter 2, we build models that can learn an object like “otter” from classification data, which is abundant and easy to collect, then compose novel sentences at test time describing “otter”, without any “otter” image caption examples at train time. Second, visual description models can be heavily driven by biases found in the training dataset. This can lead to object hallucination, in which models describe objects not present in an image. In Chapter 3, we propose tools to analyze language bias through the lens of object hallucination. Language bias can also lead to bias amplification; e.g., if otters occur in 70% of train images, at test time a model might predict that otters occur in 85% of test images. We propose the Equalizer model in Chapter 4 to mitigate such bias in a special, yet important, case: gender bias.
Moving on from captioning, we consider how systems that provide natural language text about an image can help humans better understand an AI system. In Chapter 5, we propose to generate visual explanations with natural language that rationalize the output of a deep visual classifier. We show these explanations can help humans understand when to accept or reject decisions made by an AI agent. Finally, in Chapter 6, we consider a new task at the intersection of language and vision: moment localization in videos with natural language. We detail the collection of a large-scale dataset for this task, as well as the first models for moment localization.

Advisor: Trevor Darrell


BibTeX citation:

@phdthesis{Hendricks:EECS-2019-56,
    Author= {Hendricks, Lisa},
    Title= {Visual Understanding through Natural Language},
    School= {EECS Department, University of California, Berkeley},
    Year= {2019},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-56.html},
    Number= {UCB/EECS-2019-56},
    Abstract= {Powered by deep convolutional networks and large-scale visual datasets, modern computer vision systems can accurately recognize thousands of visual categories. However, images contain much more than categorical labels: they contain information about where objects are located (in a forest or in a kitchen?), what attributes an object has (red or blue?), and how objects interact with other objects in a scene (is the child sitting on a sofa, or running in a field?). Natural language provides an efficient and intuitive way for visual systems to convey important information about a visual scene.
We begin by considering a fundamental task at the intersection of language and vision: image captioning, in which a system receives an image as input and outputs a natural language sentence that describes the image. We consider two important shortcomings of modern image captioning models. First, in order to describe an object, like “otter”, captioning models require pairs of sentences and images that include the object “otter”. In Chapter 2, we build models that can learn an object like “otter” from classification data, which is abundant and easy to collect, then compose novel sentences at test time describing “otter”, without any “otter” image caption examples at train time. Second, visual description models can be heavily driven by biases found in the training dataset. This can lead to object hallucination, in which models describe objects not present in an image. In Chapter 3, we propose tools to analyze language bias through the lens of object hallucination. Language bias can also lead to bias amplification; e.g., if otters occur in 70% of train images, at test time a model might predict that otters occur in 85% of test images. We propose the Equalizer model in Chapter 4 to mitigate such bias in a special, yet important, case: gender bias.
Moving on from captioning, we consider how systems that provide natural language text about an image can help humans better understand an AI system. In Chapter 5, we propose to generate visual explanations with natural language that rationalize the output of a deep visual classifier. We show these explanations can help humans understand when to accept or reject decisions made by an AI agent. Finally, in Chapter 6, we consider a new task at the intersection of language and vision: moment localization in videos with natural language. We detail the collection of a large-scale dataset for this task, as well as the first models for moment localization.},
}

EndNote citation:

%0 Thesis
%A Hendricks, Lisa 
%T Visual Understanding through Natural Language
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 17
%@ UCB/EECS-2019-56
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-56.html
%F Hendricks:EECS-2019-56