Ronghang Hu

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-50

May 17, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-50.pdf

Vision-and-language tasks (such as answering a question about an image, grounding a referring expression, or following a natural language instruction to navigate through a visual environment) require jointly modeling and reasoning over the two modalities of image and text. We have witnessed significant progress in joint visual and linguistic reasoning, often through neural approaches trained with the help of larger datasets and more computational resources. However, is solving these vision-and-language tasks as simple as building models with more parameters and training them on more data? If not, how can we build better reasoning models that are data-efficient and generalize well?

This thesis provides an answer to the above question with structured models for vision-and-language reasoning – models with architectures that take into account the patterns and regularities in human language, visual scenes, and agents’ skills. We begin with the task of referring expression grounding, where we show that significantly better accuracy and generalization can be achieved by taking into account the compositional structures in these expressions with our proposed Compositional Modular Networks (CMNs) in Chapter 2. We further address the visual question answering task in Chapter 3 with the End-to-End Module Networks (N2NMNs) based on dynamic compositional modules that align with the reasoning steps in the questions. In Chapter 4, we extend our work on modular reasoning and propose the Stack Neural Module Networks (SNMNs), which automatically induce a proper module layout with interpretable reasoning steps. Beyond modular reasoning, we also propose to construct context-aware representations of the visual scene with Language-Conditioned Graph Networks (LCGNs) in Chapter 5 for relational reasoning, and address the problem of reading text in images for question answering with iterative pointer-augmented multimodal transformers in Chapter 6. Finally, we show that embodied tasks also require structured models, and propose the Speaker-Follower models for the navigational instruction following task in Chapter 7, in which a speaker model and a follower model complement each other. In all these scenarios, we show that by taking into account the structures in the tasks and the input modalities, our models perform and generalize significantly better than their unstructured counterparts.

Advisor: Trevor Darrell


BibTeX citation:

@phdthesis{Hu:EECS-2020-50,
    Author= {Hu, Ronghang},
    Title= {Structured Models for Vision-and-Language Reasoning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-50.html},
    Number= {UCB/EECS-2020-50},
    Abstract= {Vision-and-language tasks (such as answering a question about an image, grounding a referring expression, or following a natural language instruction to navigate through a visual environment) require jointly modeling and reasoning over the two modalities of image and text. We have witnessed significant progress in joint visual and linguistic reasoning, often through neural approaches trained with the help of larger datasets and more computational resources. However, is solving these vision-and-language tasks as simple as building models with more parameters and training them on more data? If not, how can we build better reasoning models that are data-efficient and generalize well?

This thesis provides an answer to the above question with structured models for vision-and-language reasoning – models with architectures that take into account the patterns and regularities in human language, visual scenes, and agents’ skills. We begin with the task of referring expression grounding, where we show that significantly better accuracy and generalization can be achieved by taking into account the compositional structures in these expressions with our proposed Compositional Modular Networks (CMNs) in Chapter 2. We further address the visual question answering task in Chapter 3 with the End-to-End Module Networks (N2NMNs) based on dynamic compositional modules that align with the reasoning steps in the questions. In Chapter 4, we extend our work on modular reasoning and propose the Stack Neural Module Networks (SNMNs), which automatically induce a proper module layout with interpretable reasoning steps. Beyond modular reasoning, we also propose to construct context-aware representations of the visual scene with Language-Conditioned Graph Networks (LCGNs) in Chapter 5 for relational reasoning, and address the problem of reading text in images for question answering with iterative pointer-augmented multimodal transformers in Chapter 6. Finally, we show that embodied tasks also require structured models, and propose the Speaker-Follower models for the navigational instruction following task in Chapter 7, in which a speaker model and a follower model complement each other. In all these scenarios, we show that by taking into account the structures in the tasks and the input modalities, our models perform and generalize significantly better than their unstructured counterparts.},
}

EndNote citation:

%0 Thesis
%A Hu, Ronghang 
%T Structured Models for Vision-and-Language Reasoning
%I EECS Department, University of California, Berkeley
%D 2020
%8 May 17
%@ UCB/EECS-2020-50
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-50.html
%F Hu:EECS-2020-50