Krste Asanović, Ras Bodik, Bryan Christopher Catanzaro, Joseph James Gebis, Parry Husbands, Kurt Keutzer, David A. Patterson, William Lester Plishker, John Shalf, Samuel Webb Williams, and Katherine A. Yelick

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2006-183

December 18, 2006

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf

The recent switch to parallel microprocessors is a milestone in the history of computing. Industry has laid out a roadmap for multicore designs that preserves the programming paradigm of the past via binary compatibility and cache coherence. Conventional wisdom is now to double the number of cores on a chip with each silicon generation.

A multidisciplinary group of Berkeley researchers met for nearly two years to discuss this change. Our view is that this evolutionary approach to parallel hardware and software may work for 2- or 8-processor systems, but is likely to face diminishing returns as 16- and 32-processor systems are realized, just as returns fell with greater instruction-level parallelism.

We believe that much can be learned by examining the success of parallelism at the extremes of the computing spectrum, namely embedded computing and high performance computing. This led us to frame the parallel landscape with seven questions, and to recommend the following:

- The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems.
- The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
- Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication.)
- "Autotuners" should play a larger role than conventional compilers in translating parallel programs.
- To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications.
- To be successful, programming models should be independent of the number of processors.
- To maximize application efficiency, programming models should support a wide range of data types and successful models of parallelism: task-level parallelism, word-level parallelism, and bit-level parallelism.
- Architects should not include features that significantly affect performance or energy if programmers cannot accurately measure their impact via performance counters and energy counters.
- Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
- To explore the design space rapidly, use system emulators based on Field Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.
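To make the dwarf recommendation above concrete: one of the 13 dwarfs is Sparse Linear Algebra, whose core computation is sparse matrix-vector multiply (SpMV). The Python sketch below is an editor's illustration of the computation-and-communication pattern that dwarf captures; the CSR (compressed sparse row) layout and the small example matrix are assumptions chosen for the example, not code from the report.

    # Sketch of the Sparse Linear Algebra dwarf: y = A*x with A stored in CSR
    # (compressed sparse row) form. The trait the dwarf captures is indirect,
    # irregular memory access over only the stored nonzeros.
    def spmv_csr(values, col_idx, row_ptr, x):
        """Multiply a CSR sparse matrix by a dense vector x."""
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            acc = 0.0
            # Visit only the nonzeros of row i; column indices are looked up
            # indirectly, which is what makes the access pattern irregular.
            for k in range(row_ptr[i], row_ptr[i + 1]):
                acc += values[k] * x[col_idx[k]]
            y[i] = acc
        return y

    # 3x3 example:  [[2, 0, 1],
    #                [0, 3, 0],
    #                [4, 0, 5]]
    values  = [2.0, 1.0, 3.0, 4.0, 5.0]
    col_idx = [0, 2, 1, 0, 2]
    row_ptr = [0, 2, 3, 5]
    print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]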

Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel. Researchers have the rare opportunity to re-invent these cornerstones of computing, provided they simplify the efficient programming of highly parallel systems.
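Two further recommendations can be sketched in the same spirit. First, autotuning: rather than trusting a compiler's static cost model, an autotuner generates candidate versions of a kernel, times each on the target machine, and keeps the fastest. The minimal Python sketch below searches over block sizes for a blocked matrix multiply; the kernel, matrix size, and candidate block sizes are illustrative assumptions, not drawn from the report.

    # Minimal autotuning sketch: time each candidate block size for a blocked
    # matrix multiply on this machine and keep the fastest one.
    import time

    def matmul_blocked(a, b, n, bs):
        """Naive blocked n-by-n matrix multiply with block size bs."""
        c = [[0.0] * n for _ in range(n)]
        for ii in range(0, n, bs):
            for kk in range(0, n, bs):
                for jj in range(0, n, bs):
                    for i in range(ii, min(ii + bs, n)):
                        for k in range(kk, min(kk + bs, n)):
                            aik = a[i][k]
                            for j in range(jj, min(jj + bs, n)):
                                c[i][j] += aik * b[k][j]
        return c

    def autotune(n=96, candidates=(8, 16, 32, 48)):
        """Return the block size with the best measured runtime, plus all timings."""
        a = [[1.0] * n for _ in range(n)]
        b = [[1.0] * n for _ in range(n)]
        timings = {}
        for bs in candidates:
            start = time.perf_counter()
            matmul_blocked(a, b, n, bs)
            timings[bs] = time.perf_counter() - start
        return min(timings, key=timings.get), timings

    if __name__ == "__main__":
        best, timings = autotune()
        print("best block size:", best, timings)

Second, independence from the processor count: the sketch below states only what may run in parallel (a map over independent work items) and leaves the number of worker processes to the runtime, here the default sizing of Python's multiprocessing.Pool. Again, this is an editor's illustration of the principle, not a construct proposed in the report.

    # Sketch of processor-count-independent parallelism: the program expresses
    # the available parallelism and never names a processor count.
    from multiprocessing import Pool

    def work(item):
        """Independent per-item computation (illustrative placeholder)."""
        return sum(i * i for i in range(item))

    if __name__ == "__main__":
        items = list(range(1000, 2000))
        # Pool() with no argument sizes itself to the processors available.
        with Pool() as pool:
            results = pool.map(work, items)
        print(len(results), results[0])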


BibTeX citation:

@techreport{Asanović:EECS-2006-183,
    Author= {Asanović, Krste and Bodik, Ras and Catanzaro, Bryan Christopher and Gebis, Joseph James and Husbands, Parry and Keutzer, Kurt and Patterson, David A. and Plishker, William Lester and Shalf, John and Williams, Samuel Webb and Yelick, Katherine A.},
    Title= {The Landscape of Parallel Computing Research: A View from Berkeley},
    Year= {2006},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html},
    Number= {UCB/EECS-2006-183},
    Abstract= {The recent switch to parallel microprocessors is a milestone in the history of computing. Industry has laid out a roadmap for multicore designs that preserves the programming paradigm of the past via binary compatibility and cache coherence. Conventional wisdom is now to double the number of cores on a chip with each silicon generation.

A multidisciplinary group of Berkeley researchers met nearly two years to discuss this change. Our view is that this evolutionary approach to parallel hardware and software may work from 2 or 8 processor systems, but is likely to face diminishing returns as 16 and 32 processor systems are realized, just as returns fell with greater instruction-level parallelism.

We believe that much can be learned by examining the success of parallelism at the extremes of the computing spectrum, namely embedded computing and high performance computing. This led us to frame the parallel landscape with seven questions, and to recommend the following:
<ul>
<li>The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems
<li>The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
<li>Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication.)
<li>"Autotuners" should play a larger role than conventional compilers in translating parallel programs.
<li>To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications. 
<li>To be successful, programming models should be independent of the number of processors.
<li>To maximize application efficiency, programming models should support a wide range of data types and successful models of parallelism: task-level parallelism, word-level parallelism, and bit-level parallelism.
<li>Architects should not include features that significantly affect performance or energy if programmers cannot accurately measure their impact via performance counters and energy counters.
<li>Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
<li>To explore the design space rapidly, use system emulators based on Field Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.
</ul>

Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel. Researchers have the rare opportunity to re-invent these cornerstones of computing, provided they simplify the efficient programming of highly parallel systems.},
}

EndNote citation:

%0 Report
%A Asanović, Krste 
%A Bodik, Ras 
%A Catanzaro, Bryan Christopher 
%A Gebis, Joseph James 
%A Husbands, Parry 
%A Keutzer, Kurt 
%A Patterson, David A. 
%A Plishker, William Lester 
%A Shalf, John 
%A Williams, Samuel Webb 
%A Yelick, Katherine A. 
%T The Landscape of Parallel Computing Research: A View from Berkeley
%I EECS Department, University of California, Berkeley
%D 2006
%8 December 18
%@ UCB/EECS-2006-183
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
%F Asanović:EECS-2006-183