Hardware Accelerator for Convolutional Restricted Boltzmann Machines

Junghoon Han

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-59

May 7, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-59.pdf

Restricted Boltzmann Machines (RBMs) have gained attention for their strength in aiding Monte Carlo simulations for combinatorial optimization, quantum applications, and machine learning problems. The Convolutional RBM (CRBM), a variant of the RBM, has sparked interest due to its lower parameter count and efficient performance on translationally symmetric problems. However, the stochastic nature of the CRBM often means it takes a long time to reach the ground-state solution, demanding an approach that accelerates the computation.
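
As a rough illustration of where that time goes, the Python sketch below performs repeated block-Gibbs sweeps of a single-filter CRBM on a periodic 2D lattice. It is a minimal sketch under assumed conventions ({0,1} units, one 3x3 shared filter, scalar biases, scipy's convolve2d for the convolution); the names W, b_h, b_v, L, and K are illustrative and not taken from the report.

# Minimal sketch of block-Gibbs sampling for a single-channel CRBM on a
# periodic 2D lattice. Assumes {0,1} units and one shared convolutional
# filter; all names and sizes here are illustrative assumptions.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

L, K = 16, 3                              # lattice size and filter size (assumed)
W    = 0.1 * rng.standard_normal((K, K))  # shared convolutional weights
b_h  = 0.0                                # hidden bias (scalar, shared by weight tying)
b_v  = 0.0                                # visible bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(v):
    """One hidden-then-visible block-Gibbs update with periodic boundaries."""
    # Sample every hidden unit in parallel given the visible lattice.
    p_h = sigmoid(convolve2d(v, W, mode="same", boundary="wrap") + b_h)
    h   = (rng.random(p_h.shape) < p_h).astype(float)
    # Sample every visible unit in parallel given the hidden lattice
    # (the adjoint of a circular convolution is convolution with the flipped kernel).
    p_v = sigmoid(convolve2d(h, W[::-1, ::-1], mode="same", boundary="wrap") + b_v)
    return (rng.random(p_v.shape) < p_v).astype(float)

v = (rng.random((L, L)) < 0.5).astype(float)
for _ in range(100):   # many such sweeps are needed; this loop is the costly part
    v = gibbs_sweep(v)

This repeated sweeping is the stochastic process that, per the abstract, dominates the time to reach the ground state and is the target of the hardware acceleration.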

In this work, we demonstrate our hardware accelerator for the CRBM, implemented in RTL and programmed onto an FPGA. Software applications can harness the accelerator simply by programming the weights, biases, and lattice sizes. We show that, for solving frustrated classical Hamiltonians of the Ising Shastry-Sutherland model, our hardware reaches the ground-state solution up to five orders of magnitude faster than GPUs.
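
The abstract describes the software interface only at a high level: weights, biases, and lattice sizes are programmed into the accelerator. The sketch below is a purely hypothetical host-side illustration of that idea; the class name, register offsets, and fixed-point format are assumptions for illustration, not the interface used in this work.

# Hypothetical host-side sketch of configuring a CRBM accelerator. The
# register names, offsets, and fixed-point format are illustrative
# assumptions, not the interface described in the report.
import numpy as np

class CRBMAcceleratorConfig:
    """Packs CRBM parameters into the word writes a driver might issue."""

    # Assumed register offsets (illustrative only).
    REG_LATTICE_SIZE = 0x00
    REG_BIAS         = 0x04
    REG_WEIGHT_BASE  = 0x10

    def __init__(self, write_reg):
        self.write_reg = write_reg            # callable(offset, 32-bit word)

    @staticmethod
    def to_fixed(x, frac_bits=16):
        """Convert a float to signed 32-bit fixed point (assumed format)."""
        return int(round(x * (1 << frac_bits))) & 0xFFFFFFFF

    def program(self, weights, bias, lattice_size):
        self.write_reg(self.REG_LATTICE_SIZE, lattice_size)
        self.write_reg(self.REG_BIAS, self.to_fixed(bias))
        for i, w in enumerate(np.asarray(weights).ravel()):
            self.write_reg(self.REG_WEIGHT_BASE + 4 * i, self.to_fixed(w))

# Example: capture the register writes instead of touching real hardware.
writes = []
cfg = CRBMAcceleratorConfig(lambda off, val: writes.append((off, val)))
cfg.program(weights=0.1 * np.ones((3, 3)), bias=-0.05, lattice_size=16)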

Advisor: Sayeef Salahuddin


BibTeX citation:

@mastersthesis{Han:EECS-2024-59,
    Author= {Han, Junghoon},
    Title= {Hardware Accelerator for Convolutional Restricted Boltzmann Machines},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-59.html},
    Number= {UCB/EECS-2024-59},
    Abstract= {Restricted Boltzmann Machines (RBMs) have gained attention for their strength in aiding Monte Carlo simulations for combinatorial optimization, quantum applications, and machine learning problems. The Convolutional RBM (CRBM), a variant of the RBM, has sparked interest due to its lower parameter count and efficient performance on translationally symmetric problems. However, the stochastic nature of the CRBM often means it takes a long time to reach the ground-state solution, demanding an approach that accelerates the computation.

In this work, we demonstrate our hardware accelerator for the CRBM, implemented in RTL and programmed onto an FPGA. Software applications can harness the accelerator simply by programming the weights, biases, and lattice sizes. We show that, for solving frustrated classical Hamiltonians of the Ising Shastry-Sutherland model, our hardware reaches the ground-state solution up to five orders of magnitude faster than GPUs.},
}

EndNote citation:

%0 Thesis
%A Han, Junghoon 
%T Hardware Accelerator for Convolutional Restricted Boltzmann Machines
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 7
%@ UCB/EECS-2024-59
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-59.html
%F Han:EECS-2024-59