Probabilistic Ising Architectures for Combinatorial Optimization, Machine Learning and Neuromorphic Computing

Saavan Patel

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-37

May 1, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-37.pdf

As the demand for big data grows and traditional CPUs can no longer keep pace, new computing paradigms and architectures are needed to meet the needs of our data-hungry world. In response, Ising computing and probabilistic computing have emerged as methods to solve NP-hard optimization problems (such as logistics and place-and-route in circuits), perform machine learning training and inference, model decision making in animal brains, and much more.

This work centers on parallelized computing algorithms and hardware based on probabilistic formulations of the Ising model, known as Boltzmann machines. The algorithms demonstrated use techniques from machine learning, structured algorithmic mapping, and mixed approaches. Using these algorithms, we demonstrate solutions to 16-bit integer factorization, 3-SAT, and decoding of LDPC codes, along with scalable techniques for a variety of other problems.
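
For context, a Boltzmann machine assigns an Ising energy to each configuration of binary spins and samples configurations from the corresponding Gibbs distribution; in standard notation (assumed here for illustration, not taken verbatim from the report):

    E(\mathbf{s}) = -\sum_{i<j} J_{ij} s_i s_j - \sum_i h_i s_i, \quad s_i \in \{-1, +1\}
    P(\mathbf{s}) = \tfrac{1}{Z} \exp\!\left(-E(\mathbf{s})/T\right), \quad Z = \sum_{\mathbf{s}} \exp\!\left(-E(\mathbf{s})/T\right)

Low-energy spin configurations are the most probable samples, which is what allows optimization problems encoded in the couplings J and biases h to be solved by repeated stochastic sampling.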

We map these algorithms onto a variety of hardware, from small- and medium-scale (thousands of nodes) FPGA implementations to large- and ultra-large-scale (>100,000 nodes) GPU and TPU instances. These accelerated implementations demonstrate state-of-the-art performance on the MAX-CUT benchmark problem.
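
To make the MAX-CUT mapping concrete, the sketch below is a minimal, illustrative Gibbs-sampling solver in NumPy. The random graph, annealing schedule, and sequential sweeps are assumptions for illustration; this is not the FPGA/GPU/TPU implementation described in the report.

import numpy as np

# Minimal MAX-CUT-as-Ising sketch (illustrative only; random graph, fixed
# annealing schedule, and sequential Gibbs sweeps are assumptions, not the
# report's accelerated implementation).
rng = np.random.default_rng(0)

n = 64
# Random sparse weighted graph: symmetric weight matrix W with zero diagonal.
W = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.1), k=1)
W = W + W.T

# MAX-CUT -> Ising: cut(s) = sum_{i<j} W_ij (1 - s_i s_j) / 2, so maximizing
# the cut is minimizing E(s) = -sum_{i<j} J_ij s_i s_j with J = -W.
J = -W
s = rng.choice([-1, 1], size=n)

def cut_value(spins):
    # Each edge appears twice in the full matrix, hence the extra factor of 2.
    return np.sum(W * (1 - np.outer(spins, spins))) / 4

for T in np.geomspace(2.0, 0.05, 200):        # simple geometric annealing schedule
    for i in range(n):                        # one Gibbs sweep over all spins
        local_field = J[i] @ s                # net influence of the other spins
        p_up = 1.0 / (1.0 + np.exp(np.clip(-2.0 * local_field / T, -50, 50)))
        s[i] = 1 if rng.random() < p_up else -1

print("cut value:", cut_value(s))

In an accelerated setting, the same conditional-probability update would typically be applied to many non-interacting spins or many problem replicas in parallel rather than to one spin at a time.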

Finally, we demonstrate the Parallel Asynchronous Stochastic Sampler (PASS), a neuromorphic, clock-free accelerator that mimics brain-like asynchronous computation using the Ising model. PASS offers the potential for orders-of-magnitude speedups over traditional methods for solving these problems and is the first on-chip, fully CMOS demonstration of such an architecture.
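
The clock-free update style can be loosely emulated in software by letting spins fire one at a time in random order, as in the conceptual sketch below (all parameters assumed; this illustrates asynchronous stochastic updates only and is not a model of the PASS circuit).

import numpy as np

# Conceptual emulation of asynchronous, clock-free spin updates: each event
# picks one spin at random, mimicking unsynchronized samplers firing on their
# own. Illustration only; not a model of the PASS hardware.
rng = np.random.default_rng(1)

n = 32
J = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.2)
J = np.triu(J, k=1)
J = J + J.T                                   # symmetric couplings, zero diagonal
h = np.zeros(n)
s = rng.choice([-1, 1], size=n)
T = 0.5                                       # fixed sampling temperature

for _ in range(20_000):                       # stream of asynchronous update events
    i = rng.integers(n)                       # a random spin "fires"
    field = J[i] @ s + h[i]
    p_up = 1.0 / (1.0 + np.exp(np.clip(-2.0 * field / T, -50, 50)))
    s[i] = 1 if rng.random() < p_up else -1

print("final energy:", -0.5 * s @ J @ s - h @ s)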

Advisor: Sayeef Salahuddin


BibTeX citation:

@phdthesis{Patel:EECS-2024-37,
    Author= {Patel, Saavan},
    Title= {Probabilistic Ising Architectures for Combinatorial Optimization, Machine Learning and Neuromorphic Computing},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-37.html},
    Number= {UCB/EECS-2024-37},
    Abstract= {As the demand for big data grows and traditional CPUs can no longer keep pace, new computing paradigms and architectures are needed to meet the needs of our data-hungry world. In response, Ising computing and probabilistic computing have emerged as methods to solve NP-hard optimization problems (such as logistics and place-and-route in circuits), perform machine learning training and inference, model decision making in animal brains, and much more.

This work centers on parallelized computing algorithms and hardware based on probabilistic formulations of the Ising model, known as Boltzmann machines. The algorithms demonstrated use techniques from machine learning, structured algorithmic mapping, and mixed approaches. Using these algorithms, we demonstrate solutions to 16-bit integer factorization, 3-SAT, and decoding of LDPC codes, along with scalable techniques for a variety of other problems.

We map these algorithms onto a variety of hardware, from small- and medium-scale (thousands of nodes) FPGA implementations to large- and ultra-large-scale (>100,000 nodes) GPU and TPU instances. These accelerated implementations demonstrate state-of-the-art performance on the MAX-CUT benchmark problem.

Finally, we demonstrate the Parallel Asynchronous Stochastic Sampler (PASS), a neuromorphic, clock-free accelerator that mimics brain-like asynchronous computation using the Ising model. PASS offers the potential for orders-of-magnitude speedups over traditional methods for solving these problems and is the first on-chip, fully CMOS demonstration of such an architecture.},
}

EndNote citation:

%0 Thesis
%A Patel, Saavan 
%T Probabilistic Ising Architectures for Combinatorial Optimization, Machine Learning and Neuromorphic Computing
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 1
%@ UCB/EECS-2024-37
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-37.html
%F Patel:EECS-2024-37