NBDT: Neural-Backed Decision Trees

Daniel Ho

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-65

May 26, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-65.pdf

Deep learning methods are increasingly being used to run inference on a variety of classification and regression problems, ranging from credit scoring to advertising. However, due to their increasing complexity, these models lack the basic properties needed to establish fairness and trust with users: transparency and interpretability. While the majority of recent interpretability work involves post-hoc techniques, there has been little work on ante-hoc, or intrinsically interpretable, models. Historically, decision trees have been the most popular intrinsically interpretable models, as they naturally break inference down into a sequence of decisions. Recent attempts to combine the interpretability of decision trees with the representational power of neural networks have resulted in models that (1) perform poorly in comparison to state-of-the-art models even on small datasets (e.g., MNIST) and (2) require significant architectural changes. We address these issues by proposing Neural-Backed Decision Trees (NBDTs), which take the features produced by the convolutional layers of an existing neural network and convert the final fully-connected layer into a decision tree. NBDTs achieve competitive accuracy on CIFAR10, CIFAR100, TinyImageNet, and ImageNet, while setting state-of-the-art accuracy for interpretable models on Cityscapes, Pascal-Context, and Look Into Person (LIP). Furthermore, we demonstrate the interpretability of NBDTs by presenting qualitative and quantitative evidence of semantic interpretations for our model decisions.
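
The core idea, restated concretely: the trained network is split into its convolutional feature extractor and its final fully-connected layer, and the fully-connected layer's weight rows are reused to build and traverse a class hierarchy. The Python sketch below is only illustrative and is not the report's implementation; the ResNet-18 backbone, the agglomerative (Ward) clustering used to induce the hierarchy, and the helpers node_vector and nbdt_predict are assumptions made for the example.

    # Illustrative sketch (not the report's code): reinterpret a trained network's
    # final fully-connected layer as a decision tree. Each row of the FC weight
    # matrix serves as a class representative; inner nodes are formed by
    # clustering these rows, and inference greedily descends the hierarchy.
    import torch
    import torchvision.models as models
    from scipy.cluster.hierarchy import linkage, to_tree

    # 1. Split an existing trained model into (a) the convolutional feature
    #    extractor and (b) the final fully-connected layer.
    net = models.resnet18(pretrained=True)            # assumed backbone
    net.eval()
    backbone = torch.nn.Sequential(*list(net.children())[:-1])
    fc = net.fc                                       # weight shape: (num_classes, d)

    # 2. Induce a hierarchy by agglomeratively clustering the FC weight rows
    #    (the linkage choice here is an assumption made for illustration).
    W = fc.weight.detach().cpu().numpy()              # (num_classes, d)
    root, _ = to_tree(linkage(W, method="ward"), rd=True)

    def node_vector(node):
        """Representative vector of a node: mean of its leaves' FC weight rows."""
        leaf_ids = node.pre_order(lambda leaf: leaf.id)
        return torch.from_numpy(W[leaf_ids].mean(axis=0))

    def nbdt_predict(x):
        """Hard inference for a single image tensor x of shape (1, 3, H, W)."""
        feat = backbone(x).flatten(1).squeeze(0)      # convolutional features
        node = root
        while not node.is_leaf():
            # At each inner node, follow the child whose representative vector
            # has the larger inner product with the image features.
            go_left = feat @ node_vector(node.left) >= feat @ node_vector(node.right)
            node = node.left if go_left else node.right
        return node.id                                # predicted class index

Under these assumptions, nbdt_predict(torch.randn(1, 3, 224, 224)) returns one of the 1000 ImageNet class indices, and the path taken through the hierarchy provides the sequence of intermediate decisions that makes the prediction interpretable.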

Advisors: Kurt Keutzer and Joseph Gonzalez


BibTeX citation:

@mastersthesis{Ho:EECS-2020-65,
    Author= {Ho, Daniel},
    Title= {NBDT: Neural-Backed Decision Trees},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-65.html},
    Number= {UCB/EECS-2020-65},
}

EndNote citation:

%0 Thesis
%A Ho, Daniel 
%T NBDT: Neural-Backed Decision Trees
%I EECS Department, University of California, Berkeley
%D 2020
%8 May 26
%@ UCB/EECS-2020-65
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-65.html
%F Ho:EECS-2020-65