Lianmin Zheng and Chengfan Jia and Minmin Sun and Zhao Wu and Cody Yu and Ali Ameri and Yida Wang and Jun Yang and Danyang Zhuo and Koushik Sen and Joseph Gonzalez and Ion Stoica

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-34

May 1, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-34.pdf

High-performance tensor programs are crucial for guaranteeing the efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to obtain performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to a restricted search space and ineffective exploration strategies. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor uses a task scheduler to optimize multiple subgraphs in a deep neural network simultaneously. We show that Ansor improves the execution performance of deep neural networks relative to the state of the art on Intel CPUs, ARM CPUs, and NVIDIA GPUs by up to 3.8×, 2.6×, and 1.7×, respectively.
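
The abstract names two of Ansor's core mechanisms: sampling complete programs from a hierarchical search space, and refining the samples with evolutionary search guided by a learned cost model. As a rough illustration of the second mechanism only, the sketch below runs an evolutionary loop over a toy space of tile-size knobs. Every name in it (sample_program, mutate, learned_cost, evolutionary_search) is a hypothetical stand-in, not Ansor's or TVM's actual API, and the cost function is a toy objective rather than a trained model.

Illustrative sketch (Python):

import random

# Hypothetical stand-ins for Ansor's components; not the real TVM/Ansor API.
# A "program" here is just a tuple of tunable knobs (e.g., tile sizes).

def sample_program(rng):
    """Sample a complete candidate from a toy search space of tile sizes."""
    return tuple(rng.choice([1, 2, 4, 8, 16]) for _ in range(3))

def mutate(prog, rng):
    """Randomly rewrite one knob, mimicking an evolutionary mutation step."""
    knobs = list(prog)
    i = rng.randrange(len(knobs))
    knobs[i] = rng.choice([1, 2, 4, 8, 16])
    return tuple(knobs)

def learned_cost(prog):
    """Placeholder for a learned cost model: lower is better.
    A real model would be trained on measured program throughputs."""
    return sum(abs(k - 8) for k in prog)  # toy objective: prefer tiles near 8

def evolutionary_search(rounds=10, population=32, survivors=8, seed=0):
    """Sample an initial population, then iteratively keep the candidates
    the cost model ranks best and refill the population by mutation."""
    rng = random.Random(seed)
    pop = [sample_program(rng) for _ in range(population)]
    for _ in range(rounds):
        pop.sort(key=learned_cost)
        elite = pop[:survivors]
        pop = elite + [mutate(rng.choice(elite), rng)
                       for _ in range(population - survivors)]
    return min(pop, key=learned_cost)

if __name__ == "__main__":
    best = evolutionary_search()
    print("best candidate:", best, "predicted cost:", learned_cost(best))

In the full system, the candidates that the cost model ranks highest would presumably be compiled and measured on the target hardware, with the measurements fed back to improve the model; the sketch omits that measurement loop, as well as the task scheduler that divides tuning time across subgraphs.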

Advisors: Ion Stoica and Joseph Gonzalez


BibTeX citation:

@mastersthesis{Zheng:EECS-2023-34,
    Author= {Zheng, Lianmin and Jia, Chengfan and Sun, Minmin and Wu, Zhao and Yu, Cody and Ameri, Ali and Wang, Yida and Yang, Jun and Zhuo, Danyang and Sen, Koushik and Gonzalez, Joseph and Stoica, Ion},
    Title= {Ansor: Generating High-Performance Tensor Programs for Deep Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-34.html},
    Number= {UCB/EECS-2023-34},
    Abstract= {High-performance tensor programs are crucial for guaranteeing the efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to obtain performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to a restricted search space and ineffective exploration strategies.
We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor uses a task scheduler to optimize multiple subgraphs in a deep neural network simultaneously. We show that Ansor improves the execution performance of deep neural networks relative to the state of the art on Intel CPUs, ARM CPUs, and NVIDIA GPUs by up to 3.8×, 2.6×, and 1.7×, respectively.},
}

EndNote citation:

%0 Thesis
%A Zheng, Lianmin 
%A Jia, Chengfan 
%A Sun, Minmin 
%A Wu, Zhao 
%A Yu, Cody 
%A Ameri, Ali 
%A Wang, Yida 
%A Yang, Jun 
%A Zhuo, Danyang 
%A Sen, Koushik 
%A Gonzalez, Joseph 
%A Stoica, Ion 
%T Ansor: Generating High-Performance Tensor Programs for Deep Learning
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 1
%@ UCB/EECS-2023-34
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-34.html
%F Zheng:EECS-2023-34