Implicit Structures for Graph Neural Networks
Fangda Gu
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2020-185
November 23, 2020
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-185.pdf
Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform the state-of-the-art GNN models.
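To make the fixed-point computation described in the abstract concrete, the sketch below shows a simple iteration of an equilibrium equation of the form X = phi(W X A + B). This is a minimal NumPy illustration, not the report's implementation; the shapes and symbols (A the graph adjacency matrix, W a trainable weight matrix, B an input-dependent bias, phi a componentwise non-expansive activation) are assumptions taken as one common way to instantiate the IGNN formulation.

```python
# Minimal sketch (assumed formulation, not the report's code):
# solve the equilibrium equation X = phi(W @ X @ A + B) by fixed-point iteration.
import numpy as np

def ignn_fixed_point(W, A, B, phi=lambda z: np.maximum(z, 0.0),
                     max_iter=300, tol=1e-6):
    """Iterate X <- phi(W @ X @ A + B) until the update is smaller than tol.

    Shapes (assumed): W is (m, m), X and B are (m, n), A is (n, n),
    where m is the state dimension and n the number of nodes.
    """
    X = np.zeros_like(B)
    for _ in range(max_iter):
        X_next = phi(W @ X @ A + B)
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

# Well-posedness intuition: a sufficient condition of the Perron-Frobenius type
# bounds the spectral radius of the iteration map so that a unique equilibrium
# exists; training then uses projected gradient descent to keep the weights
# inside that admissible region while differentiating through the equilibrium.
```

The iteration itself is only one way to reach the equilibrium; what matters for the framework is that the prediction is read off the converged state X rather than the output of a fixed number of message-passing layers.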
Advisor: Laurent El Ghaoui
BibTeX citation:
@mastersthesis{Gu:EECS-2020-185,
    Author = {Gu, Fangda},
    Title = {Implicit Structures for Graph Neural Networks},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-185.html},
    Number = {UCB/EECS-2020-185},
    Abstract = {Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined ``state'' vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform the state-of-the-art GNN models.}
}
EndNote citation:
%0 Thesis
%A Gu, Fangda
%T Implicit Structures for Graph Neural Networks
%I EECS Department, University of California, Berkeley
%D 2020
%8 November 23
%@ UCB/EECS-2020-185
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-185.html
%F Gu:EECS-2020-185