Analysis of Multithreaded Microprocessors under Multiprogramming

David E. Culler, Michial Gunter and James C. Lee

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-92-687
May 1992

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-687.pdf

Multithreading has been proposed as a means of tolerating long memory latencies in multiprocessor systems. Fundamentally, it allows multiple concurrent subsystems (CPU, network, and memory) to be utilized simultaneously. This is advantageous on uniprocessor systems as well, since the processor is utilized while the memory system services misses.
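
As a rough illustration of this overlap, consider a standard analytical sketch (the symbols below are introduced here for illustration and are not the report's notation): let R be the average number of instructions a thread executes between cache misses, L the miss-service time, and C the cost of a thread switch, all in instruction times. Processor utilization with N threads then behaves roughly as

\epsilon(N) \approx \min\left( \frac{N R}{R + C + L}, \frac{R}{R + C} \right), \qquad N_{\mathrm{sat}} \approx \frac{R + C + L}{R + C},

so a single thread is limited to R/(R + L), utilization grows roughly linearly as threads are added, and about N_sat threads suffice to cover the miss latency.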

We examine multithreading on high-performance uniprocessors as a means of achieving better cost/performance on multiple processes. Processor utilization and cache behavior are studied both analytically and through simulation of timesharing and multithreading using interleaved reference traces. Multithreading is advantageous when one has large on-chip caches (32 kilobytes), associativity of two, and a memory access cost of roughly 50 instruction times. At this point, a small number of threads (2-4) is sufficient, the thread switch need not be extraordinarily fast, and the memory system need support only one or two outstanding misses. The increase in processor real estate to support multithreading is modest, given the size of the cache and floating-point units.
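
Plugging illustrative numbers into the sketch above (assumed values for a worked example, not measurements from the report): take L = 50 instruction times, a switch cost of C = 10, and a run length between misses of R = 25 to 50 instructions, as a 32-kilobyte two-way cache might plausibly provide. Then

N_{\mathrm{sat}} = \frac{R + C + L}{R + C} \approx 1.8\text{--}2.4, \qquad \frac{R}{R + L} \approx 0.33\text{--}0.50 \quad\longrightarrow\quad \frac{R}{R + C} \approx 0.71\text{--}0.83,

so two to three threads already saturate the processor and the switch cost need not be tiny, in line with the report's conclusion that 2-4 threads and a moderately fast switch suffice.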

A surprising observation is that miss ratios may be lower with multithreading than with timesharing under a steady-state load. This occurs because switch-on-miss multithreading introduces unfair thread scheduling, giving more CPU cycles to processes with better cache behavior.
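
To make the unfairness mechanism concrete, the following minimal simulation sketch (illustrative only; the miss ratios, timeslice, and reference counts are assumptions, not data from the report) interleaves two synthetic threads under switch-on-miss multithreading and under fixed-timeslice timesharing, and reports each thread's share of executed references along with the aggregate miss ratio:

import random

def simulate(policy, miss_ratios, total_refs=200_000, timeslice=1_000, seed=0):
    """Return (per-thread CPU shares, aggregate miss ratio) under a policy.

    policy: 'switch_on_miss' -- run a thread until it misses, then switch
            'timeshare'      -- round-robin with a fixed timeslice of references
    miss_ratios: assumed fixed per-reference miss probability for each thread
    """
    rng = random.Random(seed)
    n = len(miss_ratios)
    executed = [0] * n
    misses = 0
    current, since_switch = 0, 0
    for _ in range(total_refs):
        executed[current] += 1
        since_switch += 1
        if rng.random() < miss_ratios[current]:
            misses += 1
            if policy == 'switch_on_miss':
                current, since_switch = (current + 1) % n, 0
        if policy == 'timeshare' and since_switch >= timeslice:
            current, since_switch = (current + 1) % n, 0
    shares = [e / total_refs for e in executed]
    return shares, misses / total_refs

if __name__ == '__main__':
    # Illustrative workload: thread 0 caches well (1% misses), thread 1 poorly (4%).
    ratios = [0.01, 0.04]
    for policy in ('switch_on_miss', 'timeshare'):
        shares, miss_ratio = simulate(policy, ratios)
        print(policy, 'CPU shares:', [round(s, 2) for s in shares],
              'aggregate miss ratio:', round(miss_ratio, 3))

With these assumed miss ratios, switch-on-miss gives the well-behaved thread average run lengths of about 100 references versus 25 for the other, hence roughly 80% of the cycles, and the aggregate miss ratio is pulled toward the better thread's; equal timeslices split the cycles evenly and yield the plain average of the two per-thread miss ratios.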


BibTeX citation:

@techreport{Culler:CSD-92-687,
    Author = {Culler, David E. and Gunter, Michial and Lee, James C.},
    Title = {Analysis of Multithreaded Microprocessors under Multiprogramming},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1992},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/5635.html},
    Number = {UCB/CSD-92-687},
    Abstract = {Multithreading has been proposed as a means of tolerating long memory latencies in multiprocessor systems. Fundamentally, it allows multiple concurrent subsystems (CPU, network, and memory) to be utilized simultaneously. This is advantageous on uniprocessor systems as well, since the processor is utilized while the memory system services misses. We examine multithreading on high-performance uniprocessors as a means of achieving better cost/performance on multiple processes. Processor utilization and cache behavior are studied both analytically and through simulation of timesharing and multithreading using interleaved reference traces. Multithreading is advantageous when one has large on-chip caches (32 kilobytes), associativity of two, and a memory access cost of roughly 50 instruction times. At this point, a small number of threads (2-4) is sufficient, the thread switch need not be extraordinarily fast, and the memory system need support only one or two outstanding misses. The increase in processor real estate to support multithreading is modest, given the size of the cache and floating-point units. A surprising observation is that miss ratios may be lower with multithreading than with timesharing under a steady-state load. This occurs because switch-on-miss multithreading introduces unfair thread scheduling, giving more CPU cycles to processes with better cache behavior.}
}

EndNote citation:

%0 Report
%A Culler, David E.
%A Gunter, Michial
%A Lee, James C.
%T Analysis of Multithreaded Microprocessors under Multiprogramming
%I EECS Department, University of California, Berkeley
%D 1992
%@ UCB/CSD-92-687
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/5635.html
%F Culler:CSD-92-687