Efficient Resource Scheduling in Multiprocessors

Soumen Chakrabarti

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-96-923
November 1996

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/CSD-96-923.pdf

As multiprocessing becomes increasingly successful in scientific and commercial computing, parallel systems will be subjected to increasingly complex and challenging workloads. To ensure good job response and high resource utilization, algorithms are needed to allocate resources to jobs and to schedule the jobs. This problem is of central importance, and it pervades systems research in areas as diverse as compilers, runtime systems, applications, and operating systems. Despite the attention this area has received, scheduling problems in practical parallel computing still lack satisfactory solutions. System builders focus on providing functionality and features; the resulting systems become so complex that many models and theoretical results lack applicability.

This thesis sits between the theory and practice of scheduling: it includes modeling, performance analysis, and practical algorithmics. We present a variety of new techniques for scheduling problems relevant to parallel scientific computing. The thesis progresses from new compile-time algorithms for message scheduling, through new runtime algorithms for processor scheduling, to a unified framework for allocating multiprocessor resources to competing jobs while optimizing both individual application performance and system throughput.

The compiler algorithm schedules network communication for parallel programs accessing distributed arrays. By analyzing and optimizing communication patterns globally, rather than at the level of single statements, we often reduce communication costs by factors of two to three in an implementation based on IBM's High Performance Fortran compiler.
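
To make the idea concrete, here is a toy sketch in Python (not the compiler's actual algorithm; the data layout is invented for the example) of the difference between per-statement and global message scheduling: aggregating across statements sends one combined message per destination instead of one per statement.

from collections import defaultdict

def schedule_per_statement(statements):
    """Naive scheme: one message per (statement, destination) pair."""
    return [(dest, payload) for stmt in statements for dest, payload in stmt]

def schedule_globally(statements):
    """Global scheme: aggregate across statements, one message per destination."""
    combined = defaultdict(list)
    for stmt in statements:
        for dest, payload in stmt:
            combined[dest].extend(payload)
    return list(combined.items())

# Each statement lists the (destination processor, array elements) it sends.
stmts = [[(1, [0, 1]), (2, [2])],      # statement 1
         [(1, [3]),    (2, [4, 5])]]   # statement 2
print(len(schedule_per_statement(stmts)))  # 4 messages
print(len(schedule_globally(stmts)))       # 2 messages carrying the same data

Fewer, larger messages amortize per-message startup cost, which is one way a global view of communication can cut costs by constant factors.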

At present, the best parallelizing compilers support regular, static, array-based parallelism, but parallel programmers are outgrowing this model. Many scientific and commercial applications have a two-level structure: the outer level is a potentially irregular and dynamic task graph, and the inner level comprises relatively regular parallelism within each task. We give new runtime algorithms for allocating processors to such tasks. The result can be a twofold increase in effective megaflops, as seen in an implementation based on ScaLAPACK, a library of scientific software for scalable parallel machines.
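
As one simple, hedged illustration of a policy for this two-level setting (not the thesis's algorithm), the sketch below space-shares P processors among the ready outer-level tasks in proportion to their estimated work, so each task's regular inner parallelism runs on its own partition.

def allocate(ready_tasks, P):
    """Split P processors among ready tasks in proportion to estimated work.

    ready_tasks maps task name -> estimated work; assumes P >= number of tasks.
    Illustrative only; real allocators also model per-task speedup curves.
    """
    alloc = {t: 1 for t in ready_tasks}   # every ready task gets one processor
    for _ in range(P - len(ready_tasks)):
        # Give the next processor to the task with the most work per processor.
        t = max(ready_tasks, key=lambda t: ready_tasks[t] / alloc[t])
        alloc[t] += 1
    return alloc

print(allocate({"lu": 100, "fft": 50, "bcast": 10}, 16))
# {'lu': 10, 'fft': 5, 'bcast': 1}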

Compilers and runtime systems target single programs. Other system software must schedule resources across multiple programs. For example, a database scheduler or a multiprocessor batch queuing system must allocate many kinds of resources among multiple programs. Some resources, such as processors, may be traded for time; others, such as memory, may not. Also, the goal is not to finish a fixed set of programs as fast as possible but to minimize the average response time of the programs, perhaps weighted by priority. We present new algorithms for such problems.
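
To make the objective concrete, the classic single-machine baseline below applies Smith's rule: running jobs in decreasing weight-to-size order minimizes the weighted average completion time on one machine. This is only a well-known baseline for intuition; the thesis treats the much harder multi-resource, malleable-job versions.

def weighted_avg_response(jobs):
    """jobs: list of (size, weight), run back to back on one machine."""
    t, total = 0.0, 0.0
    for size, weight in jobs:
        t += size                     # this job completes at time t
        total += weight * t
    return total / sum(w for _, w in jobs)

jobs = [(10.0, 1.0), (2.0, 1.0), (4.0, 3.0)]        # (size, priority weight)
smith = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
print(weighted_avg_response(jobs))    # 14.0 in the given order
print(weighted_avg_response(smith))   # 6.8 under Smith's rule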

Most of the above results assume a central scheduler with global knowledge. When the setting is distributed, decentralized techniques are needed. We study how decentralization, and the consequent local knowledge of per-processor schedulers, affects load balance in fine-grained task-parallel applications. In particular, we give new protocols for distributed load balancing and bounds on the trade-off between locality and load balance. The analysis is supported by experience implementing task queues in Multipol, a library for coding dynamic, irregular parallel applications.
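
As a rough sketch of the decentralized setting (the thesis's protocols and bounds are more specific than this), the simulation below shows randomized work stealing over per-processor task queues: each processor works from its own queue when it can, which preserves locality, and steals from a random victim when idle, which restores load balance.

import random
from collections import deque

def work_steal(queues, steps=10_000):
    """queues: one deque of task costs per processor; returns work done per processor."""
    P = len(queues)
    done = [0] * P
    for _ in range(steps):
        if not any(queues):
            break                                   # all work finished
        for p in range(P):
            if queues[p]:
                done[p] += queues[p].popleft()      # local work first (locality)
            else:
                victim = random.randrange(P)        # idle: pick a random victim
                if victim != p and queues[victim]:
                    queues[p].append(queues[victim].pop())  # steal from the tail
    return done

qs = [deque([1] * 100), deque(), deque(), deque()]  # all work starts on one processor
print(work_steal(qs))   # the 100 units end up spread across all four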

Advisor: Katherine A. Yelick


BibTeX citation:

@phdthesis{Chakrabarti:CSD-96-923,
    Author = {Chakrabarti, Soumen},
    Title = {Efficient Resource Scheduling in Multiprocessors},
    School = {EECS Department, University of California, Berkeley},
    Year = {1996},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/5706.html},
    Number = {UCB/CSD-96-923}
}

EndNote citation:

%0 Thesis
%A Chakrabarti, Soumen
%T Efficient Resource Scheduling in Multiprocessors
%I EECS Department, University of California, Berkeley
%D 1996
%@ UCB/CSD-96-923
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1996/5706.html
%F Chakrabarti:CSD-96-923