Nikhil Mishra and Mostafa Rohaninejad and Xi Chen and Pieter Abbeel

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2018-32

May 8, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-32.pdf

Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hope of generalizing to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.
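
The abstract's central architectural idea, interleaving temporal convolutions (to aggregate past experience) with soft attention (to pinpoint specific pieces of information), can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration in PyTorch, not the authors' released code: the module names (DenseBlock, AttentionBlock), filter counts, and dimensions are all illustrative choices.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseBlock(nn.Module):
        """Causal dilated 1D convolution with a gated activation;
        the output is concatenated onto the input (dense connectivity).
        Illustrative sketch; filter count is an assumption."""
        def __init__(self, in_channels, dilation, filters=16):
            super().__init__()
            self.dilation = dilation
            self.conv_f = nn.Conv1d(in_channels, filters, 2, dilation=dilation)
            self.conv_g = nn.Conv1d(in_channels, filters, 2, dilation=dilation)

        def forward(self, x):                      # x: (batch, channels, time)
            pad = (self.dilation, 0)               # left-pad so the conv is causal
            xf = self.conv_f(F.pad(x, pad))
            xg = self.conv_g(F.pad(x, pad))
            out = torch.tanh(xf) * torch.sigmoid(xg)
            return torch.cat([x, out], dim=1)

    class AttentionBlock(nn.Module):
        """Causally masked soft attention over all previous time steps;
        the attention read-out is concatenated onto the input."""
        def __init__(self, in_channels, key_dim=32, value_dim=32):
            super().__init__()
            self.query = nn.Linear(in_channels, key_dim)
            self.key = nn.Linear(in_channels, key_dim)
            self.value = nn.Linear(in_channels, value_dim)
            self.scale = math.sqrt(key_dim)

        def forward(self, x):                      # x: (batch, channels, time)
            h = x.transpose(1, 2)                  # (batch, time, channels)
            q, k, v = self.query(h), self.key(h), self.value(h)
            logits = q @ k.transpose(1, 2) / self.scale
            t = h.size(1)                          # mask out future time steps
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
            logits = logits.masked_fill(mask, float('-inf'))
            read = torch.softmax(logits, dim=-1) @ v
            return torch.cat([x, read.transpose(1, 2)], dim=1)

    # Hypothetical usage on an episode of length 32 with 10 input features:
    x = torch.randn(4, 10, 32)                     # (batch, features, time)
    block = DenseBlock(10, dilation=1)
    attn = AttentionBlock(10 + 16)                 # DenseBlock adds 16 channels
    y = attn(block(x))                             # (4, 10+16+32, 32) = (4, 58, 32)

A full model in this style would stack several such convolution blocks (with exponentially increasing dilations so the causal receptive field covers the whole episode) interleaved with attention blocks, followed by a final 1x1 convolution to produce per-timestep predictions.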

Advisor: Pieter Abbeel


BibTeX citation:

@mastersthesis{Mishra:EECS-2018-32,
    Author= {Mishra, Nikhil and Rohaninejad, Mostafa and Chen, Xi and Abbeel, Pieter},
    Title= {A Simple Neural Attentive Meta-Learner},
    School= {EECS Department, University of California, Berkeley},
    Year= {2018},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-32.html},
    Number= {UCB/EECS-2018-32},
    Abstract= {Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hope of generalizing to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.},
}

EndNote citation:

%0 Thesis
%A Mishra, Nikhil 
%A Rohaninejad, Mostafa 
%A Chen, Xi 
%A Abbeel, Pieter 
%T A Simple Neural Attentive Meta-Learner
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 8
%@ UCB/EECS-2018-32
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-32.html
%F Mishra:EECS-2018-32