Learning Low-Dimensional Structure via Closed-Loop Transcription: Equilibria and Optimization
Druv Pai
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2023-74
May 9, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-74.pdf
We consider the problem of learning maximally informative representations for data in a high-dimensional space whose distribution is supported on or around one or more low-dimensional geometric structures, with or without labels. That is, we wish to compute a linear injective map (i.e., an “encoder”) such that the image of the data (i.e., the “representations”) has maximum “information gain”; we also want to compute a suitable notion of inverse for the encoder (i.e., a “decoder”). We formulate this family of learning problems as a class of two-player games. For a broad notion of game-theoretic equilibria which is learnable via standard gradient-based optimization techniques, we show that the equilibrium solutions to games within the class indeed yield maximally informative representations and a consistent autoencoding. We then apply this framework to several instances of the closed-loop transcription (CTRL) framework, which was recently proposed for learning discriminative and generative representations for data lying on low-dimensional submanifolds, obtaining desirable representations which provably emulate and extend those given by classical theory. Finally, we present a novel optimization algorithm to obtain the particular equilibria which our theory desires, and prove its correctness in a restricted case.
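As a toy illustration of the setup described above (not the thesis's actual method), the sketch below builds data supported on a low-dimensional linear subspace, computes a linear injective encoder via PCA, measures a coding-rate-style "information gain" of the representations (the log-det rate measure from the rate-reduction literature, used here as a hypothetical stand-in for the thesis's objective), and checks that the transpose acts as a consistent decoder on the subspace. All names and the choice of objective are illustrative assumptions.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """A coding-rate-style measure of 'information gain' for
    representations Z (d x n): R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T).
    This is an assumed stand-in objective, not the thesis's exact one."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

rng = np.random.default_rng(0)

# High-dimensional data supported on a 2-dimensional linear subspace of R^10.
basis = np.linalg.qr(rng.standard_normal((10, 2)))[0]  # orthonormal basis
X = basis @ rng.standard_normal((2, 500))              # 10 x 500 samples

# "Encoder": project onto the top principal subspace; since the data is
# exactly rank 2, PCA recovers the support, and E is injective on it.
U, _, _ = np.linalg.svd(X, full_matrices=False)
E = U[:, :2].T          # 2 x 10 linear encoder
Z = E @ X               # 2 x 500 representations

# "Decoder": on the subspace, the transpose is a left inverse of the
# encoder, giving a consistent autoencoding X ≈ D @ Z.
D = E.T
recon_err = np.linalg.norm(X - D @ Z) / np.linalg.norm(X)

print(f"coding rate of Z: {coding_rate(Z):.3f}")
print(f"relative reconstruction error: {recon_err:.2e}")
```

In this linear, noiseless case the equilibrium-style pair (encoder, decoder) is recovered in closed form by PCA; the thesis's contribution concerns the game-theoretic formulation and gradient-based optimization that extend this picture to multiple structures and learned nonlinear maps.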
Advisor: Yi Ma
BibTeX citation:
@mastersthesis{Pai:EECS-2023-74,
  Author = {Pai, Druv},
  Title = {Learning Low-Dimensional Structure via Closed-Loop Transcription: Equilibria and Optimization},
  School = {EECS Department, University of California, Berkeley},
  Year = {2023},
  Month = {May},
  URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-74.html},
  Number = {UCB/EECS-2023-74},
  Abstract = {We consider the problem of learning maximally informative representations for data in a high-dimensional space whose distribution is supported on or around one or more low-dimensional geometric structures, with or without labels. That is, we wish to compute a linear injective map (i.e., an “encoder”) such that the image of the data (i.e., the “representations”) has maximum “information gain”; we also want to compute a suitable notion of inverse for the encoder (i.e., a “decoder”). We formulate this family of learning problems as a class of two-player games. For a broad notion of game-theoretic equilibria which is learnable via standard gradient-based optimization techniques, we show that the equilibrium solutions to games within the class indeed yield maximally informative representations and a consistent autoencoding. We then apply this framework to several instances of the closed-loop transcription (CTRL) framework, which was recently proposed for learning discriminative and generative representations for data lying on low-dimensional submanifolds, obtaining desirable representations which provably emulate and extend those given by classical theory. Finally, we present a novel optimization algorithm to obtain the particular equilibria which our theory desires, and prove its correctness in a restricted case.}
}
EndNote citation:
%0 Thesis
%A Pai, Druv
%T Learning Low-Dimensional Structure via Closed-Loop Transcription: Equilibria and Optimization
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 9
%@ UCB/EECS-2023-74
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-74.html
%F Pai:EECS-2023-74