Rising Stars 2020:

Yujia Huang

PhD Candidate

California Institute of Technology


Areas of Interest

  • Artificial Intelligence

Poster

Neural Networks with Recurrent Generative Feedback

Abstract

Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of the internal generative model and the external environment. Inspired by this, we enforce self-consistency in neural networks by incorporating generative recurrent feedback, and we instantiate it on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, making consistent predictions via alternating MAP inference under a Bayesian framework. CNN-F shows considerably better adversarial robustness than regular feedforward CNNs on standard benchmarks.
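To give a concrete picture of recurrent generative feedback, below is a minimal, hypothetical PyTorch sketch: a feedforward encoder is paired with a generative decoder, and the decoder's reconstruction is re-encoded over a few feedback iterations to refine the prediction. The class name, layer sizes, and iteration count are illustrative assumptions, not the actual CNN-F architecture or its alternating MAP inference procedure.

```python
import torch
import torch.nn as nn


class FeedbackCNNSketch(nn.Module):
    """Illustrative sketch of a CNN with recurrent generative feedback.

    A feedforward (recognition) path produces class logits; a generative
    (feedback) path reconstructs the input from the internal features.
    The reconstruction is re-encoded for a few iterations, loosely
    mimicking the idea of iterating toward self-consistent predictions.
    """

    def __init__(self, num_classes=10, feedback_iters=2):
        super().__init__()
        self.feedback_iters = feedback_iters
        # Feedforward path: image -> features -> logits
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)
        # Generative feedback path: features -> reconstructed image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        logits = self.classifier(h.flatten(1))
        x_hat = x
        for _ in range(self.feedback_iters):
            x_hat = self.decoder(h)   # generate an input from the internal state
            h = self.encoder(x_hat)   # re-encode the generated input
            logits = self.classifier(h.flatten(1))
        return logits, x_hat


# Usage: refine predictions on a batch of 32x32 RGB images.
model = FeedbackCNNSketch()
images = torch.randn(8, 3, 32, 32)
logits, reconstruction = model(images)
```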

Bio

Yujia Huang is a PhD candidate in Electrical Engineering at Caltech, advised by Prof. Anima Anandkumar. She obtained her bachelor's degree from Zhejiang University, China, in 2017. Her research interests are in generative models, uncertainty quantification, and biologically inspired machine learning, with an emphasis on vision tasks.
