Transferrable Representations for Visual Recognition
Jeffrey Donahue
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2017-106
May 14, 2017
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-106.pdf
The rapid progress in visual recognition capabilities over the past several years can be attributed largely to improvements in generic and transferrable feature representations, particularly learned representations based on convolutional networks (convnets) trained “end-to-end” to predict visual semantics given raw pixel intensity values. In this thesis, we analyze the structure of these convnet representations and their generality and transferability to other tasks and settings.
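To make the "end-to-end" phrasing concrete, here is a minimal illustrative sketch (not the thesis implementation) of a small convnet trained to map raw pixel intensities directly to class scores, with the classification loss backpropagated through every layer; the architecture, batch shapes, and ten-way label space are assumptions chosen for brevity.

```python
# Hypothetical end-to-end training step: raw pixels in, class scores out,
# all layers updated jointly from the classification loss.
import torch
import torch.nn as nn

convnet = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                  # 10 semantic categories (assumed)
)
optimizer = torch.optim.SGD(convnet.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)      # stand-in for a batch of raw RGB pixels
labels = torch.randint(0, 10, (8,))     # stand-in semantic labels
logits = convnet(images)                # pixels -> class scores in one pass
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()                         # gradients flow through every layer
optimizer.step()
```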
We begin in Chapter 2 by examining the hierarchical semantic structure that naturally emerges in convnet representations from large-scale supervised training, even when this structure is unobserved in the training set. Empirically, the resulting representations generalize surprisingly well to classification in related yet distinct settings.
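The kind of transfer studied here can be sketched as follows: freeze a convnet pretrained with large-scale supervision and train only a linear classifier on its penultimate-layer activations for a new, related task. The snippet below is a hedged illustration using torchvision's ImageNet-pretrained ResNet-18 and a hypothetical five-class target task; neither the network nor the task is taken from the thesis.

```python
# Illustrative feature-transfer sketch: frozen pretrained features plus a
# freshly trained linear classifier on a new classification problem.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the 512-d penultimate features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False              # the representation stays fixed

classifier = nn.Linear(512, 5)           # 5 classes in the hypothetical target task
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

images = torch.randn(4, 3, 224, 224)     # stand-in for target-domain images
labels = torch.randint(0, 5, (4,))
with torch.no_grad():
    feats = backbone(images)             # transferred convnet features
loss = nn.functional.cross_entropy(classifier(feats), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```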
Chapters 3 and 4 showcase the flexibility of convnet-based representations for prediction tasks where the inputs or targets have more complex structure. Chapter 3 focuses on representation transfer to object detection and semantic segmentation, tasks in which objects must be both localized within an image and labeled. Chapter 4 augments convnets with recurrent structure to handle recognition problems with sequential inputs (e.g., video activity recognition) or outputs (e.g., image captioning). Across each of these domains, end-to-end fine-tuning of the representation for the target task provides a substantial additional performance benefit.
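A rough sketch of these two ideas, under assumed shapes, datasets, and class counts (none drawn from the thesis): (a) fine-tuning a pretrained convnet end-to-end for a new target task rather than keeping its features frozen, and (b) an LRCN-style model that applies a convnet to each video frame and feeds the per-frame features to an LSTM for sequence-level recognition.

```python
# Hypothetical fine-tuning and convnet+LSTM sketches.
import torch
import torch.nn as nn
from torchvision import models

# (a) Fine-tuning: swap in a new classifier head and update *all* weights,
# typically with a small learning rate.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, 20)   # 20 target-task classes (assumed)
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

# (b) Convnet + recurrence for sequential inputs (e.g., activity recognition).
class ConvLSTMClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        cnn.fc = nn.Identity()
        self.cnn = cnn
        self.lstm = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))  # per-frame convnet features
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)            # summarize the sequence
        return self.head(h[-1])                 # sequence-level class scores

clip = torch.randn(2, 8, 3, 224, 224)           # 2 clips of 8 frames each
print(ConvLSTMClassifier()(clip).shape)         # torch.Size([2, 10])
```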
Finally, we address the necessity of label supervision for representation learning. In Chapter 5 we propose an unsupervised learning approach based on generative models, demonstrating that some of the transferrable semantic structure learned by supervised convnets can be learned from images alone.
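A compressed, illustrative sketch of this kind of unsupervised adversarial feature learning appears below: a generator maps latent codes to data, an encoder maps data back to latent codes, and a discriminator tries to distinguish (data, encoded latent) pairs from (generated data, latent) pairs, so that the encoder's output can serve as a learned representation. The toy MLPs and vector dimensions are assumptions for brevity, not the thesis architecture.

```python
# Toy adversarial feature-learning sketch on vectors rather than images.
import torch
import torch.nn as nn

x_dim, z_dim = 64, 16
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
E = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
D = nn.Sequential(nn.Linear(x_dim + z_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(32, x_dim)                      # stand-in for unlabeled data
z = torch.randn(32, z_dim)                      # latent samples

# Discriminator step: real pairs (x, E(x)) vs. generated pairs (G(z), z).
real_pair = torch.cat([x, E(x)], dim=1)
fake_pair = torch.cat([G(z), z], dim=1)
d_loss = bce(D(real_pair.detach()), torch.ones(32, 1)) + \
         bce(D(fake_pair.detach()), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator/encoder step: fool the discriminator; E(x) then serves as a
# feature representation for downstream recognition tasks.
ge_loss = bce(D(torch.cat([x, E(x)], dim=1)), torch.zeros(32, 1)) + \
          bce(D(torch.cat([G(z), z], dim=1)), torch.ones(32, 1))
opt_ge.zero_grad()
ge_loss.backward()
opt_ge.step()
```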
Advisor: Trevor Darrell
BibTeX citation:
@phdthesis{Donahue:EECS-2017-106,
    Author = {Donahue, Jeffrey},
    Title = {Transferrable Representations for Visual Recognition},
    School = {EECS Department, University of California, Berkeley},
    Year = {2017},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-106.html},
    Number = {UCB/EECS-2017-106},
    Abstract = {The rapid progress in visual recognition capabilities over the past several years can be attributed largely to improvements in generic and transferrable feature representations, particularly learned representations based on convolutional networks (convnets) trained ``end-to-end'' to predict visual semantics given raw pixel intensity values. In this thesis, we analyze the structure of these convnet representations and their generality and transferability to other tasks and settings. We begin in Chapter 2 by examining the hierarchical semantic structure that naturally emerges in convnet representations from large-scale supervised training, even when this structure is unobserved in the training set. Empirically, the resulting representations generalize surprisingly well to classification in related yet distinct settings. Chapters 3 and 4 showcase the flexibility of convnet-based representations for prediction tasks where the inputs or targets have more complex structure. Chapter 3 focuses on representation transfer to the object detection and semantic segmentation tasks in which objects must be localized within an image, as well as labeled. Chapter 4 augments convnets with recurrent structure to handle recognition problems with sequential inputs (e.g., video activity recognition) or outputs (e.g., image captioning). Across each of these domains, end-to-end fine-tuning of the representation for the target task provides a substantial additional performance benefit. Finally, we address the necessity of label supervision for representation learning. In Chapter 5 we propose an unsupervised learning approach based on generative models, demonstrating that some of the transferrable semantic structure learned by supervised convnets can be learned from images alone.},
}
EndNote citation:
%0 Thesis
%A Donahue, Jeffrey
%T Transferrable Representations for Visual Recognition
%I EECS Department, University of California, Berkeley
%D 2017
%8 May 14
%@ UCB/EECS-2017-106
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-106.html
%F Donahue:EECS-2017-106