Enabling Efficient and Transparent Remote Memory Access in Disaggregated Datacenters

Nathan Pemberton

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-154

December 1, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-154.pdf

Researchers from industry and academia have recently proposed to disaggregate memory in warehouse-scale computers, motivated by the increasing performance of networks and a proliferation of novel memory technologies. In a system with memory disaggregation, each compute node contains a modest amount of fast memory (e.g., high-bandwidth DRAM integrated on-package), while large-capacity or non-volatile memory is made available across the network through dedicated memory nodes. One common proposal to harness the fast local memory is to use it as a large cache for the remote bulk memory. A purely hardware implementation of this cache could minimize latency, but it may require complicated architectural changes and would lack the OS's insight into memory usage. An alternative is to manage the cache purely in software with traditional paging mechanisms. This approach requires no additional hardware, can use sophisticated algorithms, and has insight into memory usage patterns. However, our experiments show that even when paging to local memory, applications can be slowed significantly by the overhead of handling page faults, which can take several microseconds and pollute the caches. In this thesis, I propose a hybrid HW/SW cache using a new hardware device called the “page fault accelerator” (PFA), with a special focus on its impact on operating system design and performance. With the PFA, applications spend 2.5x less time managing paging and run 40% faster end-to-end.
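For intuition about the software-only approach the abstract describes, the sketch below caches a "remote" region on demand using Linux's userfaultfd mechanism: every first touch of a page traps to a user-level handler, which fetches the page and installs it before the access resumes. This is only an illustrative user-space analogue under assumed names, not the mechanism evaluated in the thesis; fetch_remote_page() is a hypothetical stand-in for an RDMA read or a request to a dedicated memory node.

/* paging_cache_sketch.c: user-space analogue of paging-based remote-memory
 * caching, using Linux userfaultfd. fetch_remote_page() is a hypothetical
 * placeholder for a fetch from a remote memory node.
 * Build: cc paging_cache_sketch.c -o paging_cache_sketch -lpthread
 * Note: newer kernels may require CAP_SYS_PTRACE or
 * vm.unprivileged_userfaultfd=1 to create a userfaultfd. */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int uffd;
static size_t page_size;

/* Placeholder: a real system would issue an RDMA read or a request
 * to a remote memory node here. */
static void fetch_remote_page(void *dst, unsigned long addr)
{
    (void)addr;
    memset(dst, 0xAB, page_size);
}

/* Handler thread: each fault on the registered region is a miss in the
 * local cache. Fetch the page, map it in, and the faulting access resumes.
 * This round trip is the multi-microsecond software overhead the abstract
 * refers to. */
static void *fault_handler(void *arg)
{
    (void)arg;
    char *staging = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct uffd_msg msg;

    if (staging == MAP_FAILED)
        exit(1);
    for (;;) {
        struct pollfd pfd = { .fd = uffd, .events = POLLIN };
        if (poll(&pfd, 1, -1) == -1 || read(uffd, &msg, sizeof msg) <= 0)
            exit(1);
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            continue;

        unsigned long addr = msg.arg.pagefault.address & ~(page_size - 1);
        fetch_remote_page(staging, addr);

        struct uffdio_copy copy = {
            .dst = addr, .src = (unsigned long)staging,
            .len = page_size, .mode = 0,
        };
        if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
            exit(1);
    }
    return NULL;
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGE_SIZE);
    size_t len = 16 * page_size;      /* region backed "remotely" on demand */

    uffd = (int)syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    if (uffd == -1 || ioctl(uffd, UFFDIO_API, &api) == -1)
        return 1;

    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)region, .len = len },
        .mode = UFFDIO_REGISTER_MODE_MISSING,
    };
    if (region == MAP_FAILED || ioctl(uffd, UFFDIO_REGISTER, &reg) == -1)
        return 1;

    pthread_t thr;
    pthread_create(&thr, NULL, fault_handler, NULL);

    /* Every first touch below faults into the handler thread. */
    for (size_t off = 0; off < len; off += page_size)
        printf("offset %zu: 0x%02x\n", off, (unsigned char)region[off]);
    return 0;
}

The per-miss cost in this sketch (trap, wake the handler, poll/read/ioctl, resume) is the kind of synchronous fault-handling overhead that the hybrid HW/SW design with the PFA is intended to reduce.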

Advisors: Randy H. Katz and John D. Kubiatowicz


BibTeX citation:

@mastersthesis{Pemberton:EECS-2019-154,
    Author= {Pemberton, Nathan},
    Editor= {Kubiatowicz, John D. and Katz, Randy H.},
    Title= {Enabling Efficient and Transparent Remote Memory Access in Disaggregated Datacenters},
    School= {EECS Department, University of California, Berkeley},
    Year= {2019},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-154.html},
    Number= {UCB/EECS-2019-154},
    Abstract= {Researchers from industry and academia have recently proposed to disaggregate memory in warehouse-scale computers, motivated by the increasing performance of networks and a proliferation of novel memory technologies. In a system with memory disaggregation, each compute node contains a modest amount of fast memory (e.g., high-bandwidth DRAM integrated on-package), while large-capacity or non-volatile memory is made available across the network through dedicated memory nodes. One common proposal to harness the fast local memory is to use it as a large cache for the remote bulk memory. A purely hardware implementation of this cache could minimize latency, but it may require complicated architectural changes and would lack the OS's insight into memory usage. An alternative is to manage the cache purely in software with traditional paging mechanisms. This approach requires no additional hardware, can use sophisticated algorithms, and has insight into memory usage patterns. However, our experiments show that even when paging to local memory, applications can be slowed significantly by the overhead of handling page faults, which can take several microseconds and pollute the caches. In this thesis, I propose a hybrid HW/SW cache using a new hardware device called the “page fault accelerator”
(PFA), with a special focus on its impact on operating system design and performance. With the PFA, applications spend 2.5x less time managing paging and run 40% faster end-to-end.},
}

EndNote citation:

%0 Thesis
%A Pemberton, Nathan 
%E Kubiatowicz, John D. 
%E Katz, Randy H. 
%T Enabling Efficient and Transparent Remote Memory Access in Disaggregated Datacenters
%I EECS Department, University of California, Berkeley
%D 2019
%8 December 1
%@ UCB/EECS-2019-154
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-154.html
%F Pemberton:EECS-2019-154