Using Reinforcement Learning to Learn Input Viewpoints for Scene Representation

Kevin Chiang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-38

May 14, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-38.pdf

Scene representation, the process of converting visual data into efficient, accurate features, is essential for the development of general robot intelligence. The task is motivated by human experience: people typically take in a novel scene by identifying its important features and objects before planning their actions around them. The recently introduced Generative Query Network (GQN) takes in observations of a scene from randomly chosen viewpoints, constructs an internal representation, and uses that representation to predict the image seen from an arbitrary query viewpoint. GQNs have shown that accurate representations of varied scenes can be learned without human labels or prior domain knowledge, but one remaining limitation is that the input viewpoints are chosen at random. By training an agent to decide where to capture the input observations, we can supply the GQN with more informative and less redundant data. We show that an agent trained with reinforcement learning (RL) can select input viewpoints that carry substantially more useful information than random ones, yielding better representations and thus more complete reconstructions, which may in turn improve performance on tasks in complex environments.
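The mechanism the abstract describes, an RL agent choosing where to observe so that a GQN-style model reconstructs unseen views more accurately, can be illustrated with a deliberately tiny Python sketch. Everything below (the ring of discrete viewpoints, DETAIL_VIEW, toy_gqn_error, the epsilon-greedy bandit) is a hypothetical stand-in invented for this illustration, not the report's environment, model, or agent; the toy error term merely plays the role of the GQN's reconstruction loss, negated to serve as a reward.

import numpy as np

rng = np.random.default_rng(0)

N_VIEWS = 12            # discrete camera positions on a ring around the scene
EPISODES = 2000         # training episodes for the bandit
ALPHA, EPS = 0.05, 0.1  # value-update step size and exploration rate

# Toy scene: most visual detail is concentrated near one viewpoint, so
# observations taken from near it are the most informative. DETAIL_VIEW
# is an invented stand-in for real scene structure.
DETAIL_VIEW = 7

def ring_dist(a, b):
    # Shortest distance between two viewpoints on the ring.
    d = abs(a - b)
    return min(d, N_VIEWS - d)

def toy_gqn_error(chosen, query):
    # Stand-in for GQN reconstruction error at a query view: error is
    # dominated by how far the chosen input sits from the detailed region,
    # plus a small term for its distance to the query itself.
    return (ring_dist(chosen, DETAIL_VIEW) + 0.1 * ring_dist(chosen, query)) / N_VIEWS

q_values = np.zeros(N_VIEWS)  # estimated value of each candidate input viewpoint

for _ in range(EPISODES):
    # Epsilon-greedy viewpoint selection: the simplest RL setting (a bandit).
    if rng.random() < EPS:
        v = int(rng.integers(N_VIEWS))
    else:
        v = int(q_values.argmax())
    query = int(rng.integers(N_VIEWS))             # random held-out query view
    reward = -toy_gqn_error(v, query)              # lower error -> higher reward
    q_values[v] += ALPHA * (reward - q_values[v])  # incremental value update

print("learned input viewpoint:", int(q_values.argmax()))  # tends toward DETAIL_VIEW

In the full setting described by the abstract, the same loop would presumably act over continuous camera poses with a learned policy, and the GQN's actual reconstruction loss on held-out query views would supply the reward that this toy error term stands in for.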

Advisor: Pieter Abbeel


BibTeX citation:

@mastersthesis{Chiang:EECS-2019-38,
    Author= {Chiang, Kevin},
    Title= {Using Reinforcement Learning to Learn Input Viewpoints for Scene Representation},
    School= {EECS Department, University of California, Berkeley},
    Year= {2019},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-38.html},
    Number= {UCB/EECS-2019-38},
    Abstract= {Scene representation, the process of converting visual data into efficient, accurate features, is essential for the development of general robot intelligence. The task is motivated by human experience: people typically take in a novel scene by identifying its important features and objects before planning their actions around them. The recently introduced Generative Query Network (GQN) takes in observations of a scene from randomly chosen viewpoints, constructs an internal representation, and uses that representation to predict the image seen from an arbitrary query viewpoint. GQNs have shown that accurate representations of varied scenes can be learned without human labels or prior domain knowledge, but one remaining limitation is that the input viewpoints are chosen at random. By training an agent to decide where to capture the input observations, we can supply the GQN with more informative and less redundant data. We show that an agent trained with reinforcement learning (RL) can select input viewpoints that carry substantially more useful information than random ones, yielding better representations and thus more complete reconstructions, which may in turn improve performance on tasks in complex environments.},
}

EndNote citation:

%0 Thesis
%A Chiang, Kevin 
%T Using Reinforcement Learning to Learn Input Viewpoints for Scene Representation
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 14
%@ UCB/EECS-2019-38
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-38.html
%F Chiang:EECS-2019-38