Extending Temporal-Vector Microarchitectures for Two-Dimensional Computations
Colin Schmidt
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2021-186
August 12, 2021
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-186.pdf
Modern computing is shaped by technology trends, such as the slowing of Moore’s law and the end of Dennard scaling, as well as application trends, such as the widespread adoption of machine learning. Technology has constrained modern computer architectures to focus on energy efficiency in order to improve battery life, total cost of ownership, and even performance. Emerging deep-learning applications require computation volumes that increase exponentially, yet their structure changes substantially every few years. One solution to both of these problems is specialized programmable architectures that can adapt to new applications while specializing for their commonalities, thus improving energy efficiency.
This thesis presents a set of two-dimensional architecture extensions for Hwacha, an existing vector-fetch architecture, designed to improve energy efficiency on two-dimensional computations while remaining fully programmable. This thesis discusses the constraints that modern CMOS process technologies place on such an architecture and describes several silicon implementations of similar architectures. Finally, this thesis presents the physical implementation of these extensions and their realized energy-efficiency gains on select applications.
Advisor: Krste Asanović
BibTeX citation:
@phdthesis{Schmidt:EECS-2021-186,
    Author = {Schmidt, Colin},
    Title = {Extending Temporal-Vector Microarchitectures for Two-Dimensional Computations},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {Aug},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-186.html},
    Number = {UCB/EECS-2021-186},
    Abstract = {Modern computing is shaped by technology trends, such as the slowing of Moore’s law and the end of Dennard scaling, as well as application trends, such as the widespread adoption of machine learning. Technology has constrained modern computer architectures to focus on energy efficiency in order to improve battery life, total cost of ownership, and even performance. Emerging deep-learning applications require computation volumes that increase exponentially, yet their structure changes substantially every few years. One solution to both of these problems is specialized programmable architectures that can adapt to new applications while specializing for their commonalities, thus improving energy efficiency. This thesis presents a set of two-dimensional architecture extensions for Hwacha, an existing vector-fetch architecture, designed to improve energy efficiency on two-dimensional computations while remaining fully programmable. This thesis discusses the constraints that modern CMOS process technologies place on such an architecture and describes several silicon implementations of similar architectures. Finally, this thesis presents the physical implementation of these extensions and their realized energy-efficiency gains on select applications.},
}
EndNote citation:
%0 Thesis
%A Schmidt, Colin
%T Extending Temporal-Vector Microarchitectures for Two-Dimensional Computations
%I EECS Department, University of California, Berkeley
%D 2021
%8 August 12
%@ UCB/EECS-2021-186
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-186.html
%F Schmidt:EECS-2021-186