Learning Grounded Pragmatic Communication

Daniel Fried

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2021-247

December 1, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-247.pdf

This dissertation shows how language generation and interpretation across varied grounded domains can be improved through pragmatic inference: explicitly reasoning about the actions and intents of the people the systems interact with. We train neural generation (speaker) and interpretation (listener) models that ground language in a world context, then layer a pragmatic inference procedure on top of these models. This pragmatic procedure predicts how human listeners will interpret text generated by the models, and reasons counterfactually about why human speakers produced the text they did.
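
In the rational speech acts notation that this procedure resembles (a standard formulation used here only for illustration; the dissertation's exact scoring and priors may differ), a base speaker S_0(u | a, w) and a base listener L_0(a | u, w) over utterances u, actions a, and world contexts w give rise to pragmatic counterparts by reasoning about one another:

    L_1(a \mid u, w) \propto S_0(u \mid a, w) \, p(a \mid w)
    S_1(u \mid a, w) \propto L_0(a \mid u, w)

The pragmatic listener L_1 asks which action best explains why a speaker would have produced the utterance; the pragmatic speaker S_1 asks which utterance a listener is most likely to interpret as the intended action.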

We begin by showing that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential grounded tasks. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans).
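
Concretely, one way to realize this inference is to sample candidates from the base model and rerank them with the dual model. The sketch below is a minimal illustration under assumed interfaces, not the dissertation's exact implementation; base_logprob, dual_logprob, and weight are hypothetical names.

from typing import Callable, Sequence, TypeVar

C = TypeVar("C")

def pragmatic_rerank(
    candidates: Sequence[C],
    base_logprob: Callable[[C], float],   # e.g. log S0(u | a, w) for speaker candidates
    dual_logprob: Callable[[C], float],   # e.g. log L0(a | u, w): would a listener succeed?
    weight: float = 1.0,                  # rationality weight; a hypothetical hyperparameter
) -> C:
    """Return the candidate that maximizes a weighted combination of the
    base model's score and the dual model's score."""
    return max(candidates, key=lambda c: base_logprob(c) + weight * dual_logprob(c))

With candidate instructions drawn from the speaker and a listener's success probability as the rescorer, this selects instructions humans are likely to follow correctly; with candidate action sequences and a speaker rescorer, it selects actions that best explain a human's instruction.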

Next, we extend this approach to vision-and-language navigation. We combine visually grounded listener and speaker models, using the speaker model both to synthesize new instructions for data augmentation and to evaluate candidate action sequences during pragmatic inference. Both models are supported by a panoramic action space that reflects the granularity of human-generated instructions. Experiments show that all three components of this approach (speaker-driven data augmentation, pragmatic inference, and the panoramic action space) dramatically improve the performance of a baseline instruction follower, more than doubling the success rate over the best existing approach on a standard benchmark.
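
As an illustration of the speaker-driven augmentation (a sketch with assumed types and names; Route, speaker_generate, and the route-sampling step are placeholders, and the actual training details are in the dissertation), the trained speaker labels unannotated routes with synthetic instructions that are mixed into the follower's training data:

from typing import Callable, List, Tuple

Route = List[str]  # e.g. a sequence of viewpoint/heading actions; representation is a placeholder

def speaker_driven_augmentation(
    human_pairs: List[Tuple[str, Route]],      # (instruction, route) pairs from annotators
    sampled_routes: List[Route],               # unlabeled routes sampled from the environment
    speaker_generate: Callable[[Route], str],  # trained speaker model: route -> instruction
) -> List[Tuple[str, Route]]:
    """Mix synthetic (instruction, route) pairs produced by the speaker
    into the follower's training data."""
    synthetic_pairs = [(speaker_generate(route), route) for route in sampled_routes]
    return human_pairs + synthetic_pairs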

Finally, we present a grounded neural dialogue model that successfully collaborates with people in a partially observable reference game. We focus on a setting where two agents each observe an overlapping part of a world context and need to identify and agree on some object they share. To solve the task, the agents must pool their information and communicate pragmatically. Our dialogue agent accurately grounds referents from its partner's utterances using a structured reference resolver, conditions on these referents using a recurrent memory, and uses a pragmatic generation procedure to ensure that the partner can resolve the references it produces.
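
A minimal sketch of the turn structure this describes, with every module interface assumed for illustration (the resolver, memory, speaker, and simulated partner are neural components in the dissertation; the names and signatures below are hypothetical):

class DialogueAgent:
    """Sketch of the agent's control flow; module interfaces are
    illustrative assumptions, not the dissertation's API."""

    def __init__(self, resolver, memory, speaker, partner_model):
        self.resolver = resolver            # maps (utterance, context) to referents in the agent's view
        self.memory = memory                # recurrent memory over resolved referents
        self.speaker = speaker              # proposes candidate utterances from the dialogue state
        self.partner_model = partner_model  # simulated partner: P(referent | utterance, context)

    def observe(self, partner_utterance, context):
        """Ground the partner's referring expressions and remember them."""
        referents = self.resolver.resolve(partner_utterance, context)
        self.memory.update(referents)

    def speak(self, intended_referent, context, n_candidates=10):
        """Pragmatic generation: keep the candidate utterance the simulated
        partner is most likely to resolve to the intended referent."""
        candidates = self.speaker.sample(self.memory.state(), n=n_candidates)
        return max(
            candidates,
            key=lambda u: self.partner_model.prob(intended_referent, u, context),
        )

The speak step mirrors the pragmatic reranking above: generation is filtered through the agent's model of how its partner will interpret each candidate.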

Advisor: Daniel Klein


BibTeX citation:

@phdthesis{Fried:EECS-2021-247,
    Author= {Fried, Daniel},
    Title= {Learning Grounded Pragmatic Communication},
    School= {EECS Department, University of California, Berkeley},
    Year= {2021},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-247.html},
    Number= {UCB/EECS-2021-247},
}

EndNote citation:

%0 Thesis
%A Fried, Daniel 
%T Learning Grounded Pragmatic Communication
%I EECS Department, University of California, Berkeley
%D 2021
%8 December 1
%@ UCB/EECS-2021-247
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-247.html
%F Fried:EECS-2021-247