Learned Token Pruning for Efficient Transformer Inference
Sehoon Kim and Sheng Shen and David Thorsley and Amir Gholami and Woosuk Kwon and Joseph Hassoun and Kurt Keutzer
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-119
May 11, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-119.pdf
Efficient deployment of transformer models in practice is challenging due to their inference cost, including memory footprint, latency, and power consumption, which scales quadratically with input sequence length. To address this, we present a novel token reduction method dubbed Learned Token Pruning (LTP), which adaptively removes unimportant tokens as an input sequence passes through transformer layers. In particular, LTP prunes tokens with an attention score below a threshold whose value is learned for each layer during training. Our threshold-based method allows the length of the pruned sequence to vary adaptively based on the input sequence and avoids algorithmically expensive operations such as top-k token selection. We extensively test the performance of LTP on GLUE and SQuAD tasks and show that our method outperforms prior state-of-the-art token pruning methods by up to ~2.5% higher accuracy with the same amount of FLOPs. In particular, LTP achieves up to 2.1x FLOPs reduction with less than 1% accuracy drop, which results in up to 1.9x and 2.0x throughput improvements on Intel Haswell CPUs and NVIDIA V100 GPUs, respectively. Furthermore, we demonstrate that LTP is more robust than prior methods to variations in input sequence length.
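As a rough illustration of the mechanism described in the abstract, the sketch below shows how one transformer layer could drop tokens whose attention score falls below that layer's learned threshold, with no top-k sorting involved. The function name, the exact scoring rule (averaging attention over heads and query positions), and the masking scheme are assumptions made for illustration; they are not the report's implementation.

```python
import torch

def prune_tokens_by_threshold(hidden_states, attention_probs, threshold, keep_mask):
    """Hypothetical sketch of per-layer threshold-based token pruning at inference.

    hidden_states:   (batch, seq_len, hidden)        token representations after a layer
    attention_probs: (batch, heads, seq_len, seq_len) softmax attention of that layer
    threshold:       scalar threshold learned for this layer during training
    keep_mask:       (batch, seq_len) 1.0 for tokens still alive, 0.0 for pruned ones
    """
    # Token importance: the attention each token receives, averaged over heads
    # and query positions. (This particular scoring rule is an assumption.)
    scores = attention_probs.mean(dim=1).mean(dim=1)            # (batch, seq_len)

    # Keep a token only if its score clears the layer's learned threshold and it
    # has not already been pruned in an earlier layer. A simple comparison
    # replaces any sort-based top-k selection.
    keep = (scores >= threshold) & keep_mask.bool()             # (batch, seq_len)

    # Zero out pruned positions. (A real implementation would instead gather the
    # surviving tokens to shrink the sequence and save compute in later layers.)
    hidden_states = hidden_states * keep.unsqueeze(-1).to(hidden_states.dtype)
    return hidden_states, keep.to(keep_mask.dtype)

# Example usage with random tensors (shapes are illustrative).
if __name__ == "__main__":
    B, H, S, D = 2, 12, 128, 768
    h = torch.randn(B, S, D)
    attn = torch.softmax(torch.randn(B, H, S, S), dim=-1)
    mask = torch.ones(B, S)
    h, mask = prune_tokens_by_threshold(h, attn, threshold=0.004, keep_mask=mask)
    print(mask.sum(dim=-1))  # number of tokens surviving per sequence
```

Because the pruning decision is a per-token comparison against a scalar, the number of surviving tokens adapts to each input sequence rather than being fixed in advance, which is the contrast with top-k selection drawn in the abstract.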
Advisor: Kurt Keutzer
BibTeX citation:
@mastersthesis{Kim:EECS-2023-119,
    Author = {Kim, Sehoon and Shen, Sheng and Thorsley, David and Gholami, Amir and Kwon, Woosuk and Hassoun, Joseph and Keutzer, Kurt},
    Title = {Learned Token Pruning for Efficient Transformer Inference},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-119.html},
    Number = {UCB/EECS-2023-119},
    Abstract = {Efficient deployment of transformer models in practice is challenging due to their inference cost, including memory footprint, latency, and power consumption, which scales quadratically with input sequence length. To address this, we present a novel token reduction method dubbed Learned Token Pruning (LTP), which adaptively removes unimportant tokens as an input sequence passes through transformer layers. In particular, LTP prunes tokens with an attention score below a threshold whose value is learned for each layer during training. Our threshold-based method allows the length of the pruned sequence to vary adaptively based on the input sequence and avoids algorithmically expensive operations such as top-k token selection. We extensively test the performance of LTP on GLUE and SQuAD tasks and show that our method outperforms prior state-of-the-art token pruning methods by up to ~2.5% higher accuracy with the same amount of FLOPs. In particular, LTP achieves up to 2.1x FLOPs reduction with less than 1% accuracy drop, which results in up to 1.9x and 2.0x throughput improvements on Intel Haswell CPUs and NVIDIA V100 GPUs, respectively. Furthermore, we demonstrate that LTP is more robust than prior methods to variations in input sequence length.},
}
EndNote citation:
%0 Thesis
%A Kim, Sehoon
%A Shen, Sheng
%A Thorsley, David
%A Gholami, Amir
%A Kwon, Woosuk
%A Hassoun, Joseph
%A Keutzer, Kurt
%T Learned Token Pruning for Efficient Transformer Inference
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 11
%@ UCB/EECS-2023-119
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-119.html
%F Kim:EECS-2023-119