Mapspace Optimization for Tensor Computations with Bayesian Learning
J V Iniyaal Kannan
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-91
May 10, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-91.pdf
Tensor computations are becoming increasingly important with the emergence of fields such as AI, data analytics, and robotics. For these workloads, memory access cost is the dominant performance bottleneck. New architectures with specialized memory layouts and parallelizable compute elements are being designed for faster computation. To fully exploit such an architecture's capabilities and achieve maximum performance, an optimal communication-avoiding mapping from algorithm to hardware is needed. Manually finding this hardware-specific, energy-efficient mapping is time-consuming and requires expertise in multiple domains. Traditional optimization methods such as gradient descent fail to find an optimal mapping because the mapping space is non-smooth and non-convex. Other ML-based, feedback-driven approaches find good solutions but do not generalize well to new architectures. In this paper, we propose using GPTune, an autotuning framework based on Bayesian optimization, to navigate this search space. Our experiments show that GPTune finds efficient mappings in far fewer iterations than Timeloop-mapper's random search. GPTune also builds surrogate models that can be used for transfer learning and to potentially reduce the dimensionality of the mapspace. Furthermore, this paper analyzes which mapspace encodings work best for tuning.
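To illustrate the idea in the abstract, the sketch below runs a Bayesian-optimization loop over a toy mapspace. This is not GPTune's actual API: it uses scikit-optimize's gp_minimize as a stand-in Gaussian-process tuner, the tile-size parameters (tile_m, tile_n, tile_k) are a hypothetical mapspace encoding, and mapping_cost is a fabricated non-smooth objective standing in for a hardware cost model such as Timeloop.

```python
# Illustrative sketch only: GPTune's real interface differs, and the cost
# function below is a stand-in for invoking a cost model such as Timeloop.
from skopt import gp_minimize
from skopt.space import Integer

# Hypothetical mapspace: tile sizes for the three loops of a matmul.
# Real mapspaces also encode loop permutations and spatial/temporal splits.
space = [
    Integer(1, 64, name="tile_m"),
    Integer(1, 64, name="tile_n"),
    Integer(1, 64, name="tile_k"),
]

def mapping_cost(tiles):
    """Placeholder for an energy/latency estimate of one mapping.

    In the report, each candidate mapping is scored by a hardware cost
    model; here we fake a non-smooth, non-convex surface so the example
    runs standalone.
    """
    m, n, k = tiles
    # Penalize tilings whose working set misses a made-up buffer size,
    # plus an oscillatory term that makes the landscape non-convex.
    working_set = m * n + n * k + m * k
    return abs(working_set - 1024) + 50 * ((m * n * k) % 7)

# Bayesian optimization: fit a Gaussian-process surrogate to the observed
# costs and choose each next mapping via an acquisition function, rather
# than sampling the mapspace uniformly at random.
result = gp_minimize(mapping_cost, space, n_calls=30, random_state=0)
print("best tiling:", result.x, "estimated cost:", result.fun)
```

The point of the sketch is the sample-efficiency argument: because each evaluation of a real cost model is expensive, a surrogate-guided search can reach a good mapping in far fewer trials than random search.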
Advisor: James Demmel
BibTeX citation:
@mastersthesis{Kannan:EECS-2023-91,
    Author = {Kannan, J V Iniyaal},
    Title = {Mapspace Optimization for Tensor Computations with Bayesian Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-91.html},
    Number = {UCB/EECS-2023-91}
}
EndNote citation:
%0 Thesis
%A Kannan, J V Iniyaal
%T Mapspace Optimization for Tensor Computations with Bayesian Learning
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 10
%@ UCB/EECS-2023-91
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-91.html
%F Kannan:EECS-2023-91