Lifted Recurrent Neural Networks

Rajiv Sambharya

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2018-52

May 11, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-52.pdf

In this report, we extend the lifted neural network framework (described in Section 2) to recurrent neural networks (RNNs). As in the general lifted neural network case, the activation functions are encoded via penalties in the training problem. The new framework admits algorithms such as block-coordinate descent, in which each step is a simple (no hidden layer) supervised learning problem that is parallelizable across data points and/or layers. The lifted methodology is particularly attractive for recurrent neural networks, where standard gradient-based training performs poorly due to the vanishing and exploding gradient problems. Experiments on toy datasets indicate that our lifted model is better equipped to handle long-term dependencies and long sequences.
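To make the penalty encoding concrete, the following is a rough sketch in illustrative notation (the symbols, loss, and divergence below are our assumptions, not the report's exact formulation). For a single-layer RNN with activation \phi, the usual recursion h_t = \phi(W h_{t-1} + U x_t + b) is replaced by treating the hidden states h_t as optimization variables and penalizing their deviation from the pre-activations:

    \min_{W, U, b, V, c, \{h_t\}} \;\; \sum_{t=1}^{T} \ell(y_t, V h_t + c) \;+\; \lambda \sum_{t=1}^{T} D(h_t,\; W h_{t-1} + U x_t + b),

where D is a divergence chosen so that minimizing it over h_t recovers \phi (for ReLU, for instance, D(h, z) = \|h - z\|_2^2 with the constraint h \ge 0). With the hidden states fixed, each weight update is a simple (no hidden layer) supervised learning problem; with the weights fixed, the state updates are convex and parallelizable across data points, so block-coordinate descent can be applied without backpropagating gradients through the recursion.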

Advisor: Laurent El Ghaoui


BibTeX citation:

@mastersthesis{Sambharya:EECS-2018-52,
    Author= {Sambharya, Rajiv},
    Title= {Lifted Recurrent Neural Networks},
    School= {EECS Department, University of California, Berkeley},
    Year= {2018},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-52.html},
    Number= {UCB/EECS-2018-52},
    Abstract= {In this paper, we extend the lifted neural network framework (described in section 2) to apply to recurrent neural networks (RNNs). As with the
general lifted neural network case, the activation functions are encoded via penalties in the training problem. The new framework allows for algorithms such as block-coordinate descent methods to be applied, in which each step is composed of a simple (no hidden layer) supervised learning problem that is parallelizable across data points and/or layers. The lifted methodology is particularly interesting in the case of recurrent neural networks because standard methods of optimization on recurrent neural networks perform poorly
due to the vanishing and exploding gradient problems. Experiments on toy datasets indicate that our lifted model is more equipped to handle long-term dependencies and long sequences.},
}

EndNote citation:

%0 Thesis
%A Sambharya, Rajiv 
%T Lifted Recurrent Neural Networks
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 11
%@ UCB/EECS-2018-52
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-52.html
%F Sambharya:EECS-2018-52