Input-Output Performance Evaluation: Self-Scaling Benchmarks, Predicted Performance

Peter Ming-Chien Chen

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-92-714
November 1992

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-714.pdf

Over the past 20 years, processor performance has grown much faster than input/output (I/O) performance. As a result, overall system speed is increasingly limited by the speed of the I/O system, and I/O systems are evolving rapidly to keep pace with processors. This evolution renders current I/O performance evaluation techniques obsolete or irrelevant just as I/O evaluation is becoming more important. This dissertation investigates two new ideas in I/O evaluation: self-scaling benchmarks and predicted performance.

This dissertation's self-scaling benchmark seeks to measure performance on workloads relevant to a wide range of input/output systems. To do so, it scales aspects of its workload to the system under test. For example, it dynamically discovers the size of the system's file cache and reports how performance varies both inside and outside the cache. The general approach is to scale each workload parameter to the range over which the system performs well.
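
The report's benchmark code is not reproduced here, but the cache-discovery idea is easy to sketch: grow a file until re-reading it no longer fits in the file cache, and watch the throughput collapse. The C program below is an illustrative sketch of that idea under stated assumptions (the file name, size range, and halving threshold are all arbitrary choices), not the dissertation's actual benchmark.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define CHUNK (64 * 1024)

    static double now_secs(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Write a file of `bytes` bytes, read it once to warm the cache,
     * then time a second sequential read and return MB/s. */
    static double reread_mb_per_s(const char *path, long long bytes) {
        static char buf[CHUNK];
        FILE *f = fopen(path, "wb");
        if (!f) { perror(path); exit(1); }
        memset(buf, 0xAB, sizeof buf);
        for (long long n = 0; n < bytes; n += CHUNK)
            fwrite(buf, 1, CHUNK, f);
        fclose(f);

        f = fopen(path, "rb");              /* warming read */
        while (fread(buf, 1, CHUNK, f) == CHUNK) ;
        fclose(f);

        double t0 = now_secs();             /* timed re-read */
        f = fopen(path, "rb");
        while (fread(buf, 1, CHUNK, f) == CHUNK) ;
        fclose(f);
        return (bytes / 1e6) / (now_secs() - t0);
    }

    int main(void) {
        /* Double the file size until re-read throughput falls below
         * half of the cached-read rate; that knee brackets the
         * file cache size. */
        double cached_rate = 0;
        for (long mb = 8; mb <= 4096; mb *= 2) {
            double rate = reread_mb_per_s("scratch.dat", (long long)mb * 1024 * 1024);
            printf("%5ld MB: %8.1f MB/s\n", mb, rate);
            if (cached_rate == 0)
                cached_rate = rate;
            else if (rate < cached_rate / 2) {
                printf("file cache size is between %ld and %ld MB\n", mb / 2, mb);
                break;
            }
        }
        remove("scratch.dat");
        return 0;
    }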

The self-scaling benchmark helps the evaluator gain insight into the system's performance by displaying how performance varies with each of five workload parameters: amount of file space, request size, fraction of reads, fraction of sequential accesses, and number of simultaneous accesses. The utility of the benchmark is demonstrated by running it on a wide variety of I/O systems, ranging from a single-disk, low-end workstation to a mini-supercomputer with an array of four disks. On each system, the benchmark yields performance insights, such as the size of the file cache, the performance gained from larger requests, the file cache's write policy, and the benefits of higher workload concurrency.
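
The structure behind the five graphs can be sketched as a loop: hold four parameters at a focal point and sweep the fifth, then repeat for each parameter. In the C sketch below, run_workload() is a hypothetical stand-in that returns synthetic numbers so the example runs; the focal values are invented for illustration.

    #include <stdio.h>

    /* The five workload parameters named in the abstract. */
    typedef struct {
        double unique_mb;    /* amount of file space touched (MB) */
        double request_b;    /* request size (bytes) */
        double read_frac;    /* fraction of reads */
        double seq_frac;     /* fraction of sequential accesses */
        int    procs;        /* number of simultaneous accesses */
    } workload;

    /* Hypothetical stand-in for the real measurement: a synthetic
     * model so the sketch runs; a real harness would issue the I/O. */
    static double run_workload(workload w) {
        double size_eff = w.request_b >= 65536.0 ? 1.0 : w.request_b / 65536.0;
        return 20.0 * size_eff
                    * (0.6 + 0.4 * w.seq_frac)
                    * (0.8 + 0.2 * w.read_frac)
                    * (1.0 + 0.1 * (w.procs - 1));
    }

    int main(void) {
        /* Focal point: illustrative values; the self-scaling benchmark
         * chooses these per system, e.g. centering the file space on
         * the discovered cache size. */
        workload focal = { 256.0, 32768.0, 0.5, 0.5, 1 };

        /* One curve per parameter: vary request size here, hold the
         * other four at their focal values. The remaining parameters
         * are swept the same way. */
        for (double sz = 4096; sz <= 1048576; sz *= 2) {
            workload w = focal;
            w.request_b = sz;
            printf("request %7.0f B -> %5.1f MB/s\n", sz, run_workload(w));
        }
        return 0;
    }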

Predicted performance restores the ability to compare two machines on the same workload, an ability the self-scaling benchmark gives up by tailoring its workload to each machine. Further, it extends this ability to workloads that have never been measured by estimating performance from the graphs produced by the self-scaling benchmark. Prediction is accurate to within 10-15% over a wide range of I/O workloads and systems. This high level of accuracy demonstrates that a large workload space can be described using a few tens of points and a simple product-form performance equation.
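
The abstract does not state the equation, but a product-form predictor of this kind multiplies the focal-point throughput by one correction factor per parameter, each read off that parameter's measured curve: Throughput(X1,...,X5) ~ T_focal * prod_i [curve_i(X_i) / T_focal]. The C sketch below assumes linear interpolation and invented curve data (only two of the five curves are shown for space); it illustrates the form of such an equation, not the dissertation's exact predictor.

    #include <stdio.h>

    /* Piecewise-linear lookup into one measured curve
     * (x[] ascending, y[] the measured throughputs). */
    static double interp(const double *x, const double *y, int n, double q) {
        if (q <= x[0]) return y[0];
        for (int i = 1; i < n; i++)
            if (q <= x[i]) {
                double t = (q - x[i - 1]) / (x[i] - x[i - 1]);
                return y[i - 1] + t * (y[i] - y[i - 1]);
            }
        return y[n - 1];
    }

    int main(void) {
        /* Invented data: throughput (MB/s) vs. request size and vs.
         * fraction of sequential accesses, each measured with the
         * other parameters held at the focal point. The other three
         * curves would enter the product the same way. */
        double size_x[] = { 4096, 32768, 262144 }, size_y[] = { 4.0, 9.0, 14.0 };
        double seq_x[]  = { 0.0, 0.5, 1.0 },       seq_y[]  = { 6.0, 9.0, 12.0 };
        double t_focal  = 9.0;            /* throughput at the focal point */

        /* Predict an unmeasured workload: 64 KB requests, 80% sequential.
         * Each factor is 1 at its focal value, so the prediction reduces
         * to t_focal there. */
        double pred = t_focal
            * (interp(size_x, size_y, 3, 65536.0) / t_focal)
            * (interp(seq_x,  seq_y,  3, 0.8)     / t_focal);
        printf("predicted throughput: %.1f MB/s\n", pred);
        return 0;
    }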

Advisor: David A. Patterson


BibTeX citation:

@phdthesis{Chen:CSD-92-714,
    Author = {Chen, Peter Ming-Chien},
    Title = {Input-Output Performance Evaluation: Self-Scaling Benchmarks, Predicted Performance},
    School = {EECS Department, University of California, Berkeley},
    Year = {1992},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/6261.html},
    Number = {UCB/CSD-92-714}
}

EndNote citation:

%0 Thesis
%A Chen, Peter Ming-Chien
%T Input-Output Performance Evaluation: Self-Scaling Benchmarks, Predicted Performance
%I EECS Department, University of California, Berkeley
%D 1992
%@ UCB/CSD-92-714
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/6261.html
%F Chen:CSD-92-714