Trap Architectures for Lisp Systems

Douglas Johnson

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-88-470
November 1988

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/CSD-88-470.pdf

Recent measurements of Lisp systems show a dramatic skewing of operation frequency. For example, small integer (fixnum) arithmetic dominates most programs, but other number types can occur on almost any operation. Likewise, few memory references trigger special handling for garbage collection, but nearly all memory operations could trigger such handling. Systems like SPARC and SPUR have shown that small amounts of special hardware can significantly reduce the need for inline software checks by trapping when an unusual condition is detected.

A system's trap architecture thus becomes key to performance. In most systems, the trap architecture is intended to handle errors (e.g., address faults) or conditions requiring large amounts of processing (e.g., page faults). The requirements for Lisp traps are quite different: the trap frequency is higher, the processing time per trap is shorter, and most traps need to be handled in the user's address space and context.

This paper examines these requirements, evaluates current trap architectures, and proposes enhancements to meet them. These enhancements increase Lisp performance by 9%-32% at a cost of about 1.4% more CPU logic.
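To make the trade-off concrete, the following is a minimal sketch (not taken from the report) of the inline software type check a Lisp runtime performs for fixnum addition; the tag layout, names, and helper function are assumptions for illustration only. Tagged-add hardware of the kind cited for SPARC and SPUR lets the common fixnum case execute as a single add, trapping to a handler only when an unusual tag or an overflow appears, so the inline test instructions below disappear from the common path.

    #include <stdint.h>

    typedef intptr_t lispobj;            /* a tagged Lisp object word          */
    #define TAG_MASK   0x3               /* assumed: low two bits hold the tag */
    #define FIXNUM_TAG 0x0               /* assumed: fixnums carry tag 00      */

    /* Stand-in for the runtime's generic arithmetic (bignums, floats, ...). */
    static lispobj slow_generic_add(lispobj a, lispobj b)
    {
        /* A real runtime would dispatch on the operand types here. */
        (void)a; (void)b;
        return 0;
    }

    /* Inline-checked add: every call pays for the tag test and the overflow
     * test, even though fixnum + fixnum dominates in measured Lisp programs.
     * Tagged-add hardware performs the same tests in parallel with the add
     * and traps only when they fail, removing the inline instructions.      */
    static lispobj checked_add(lispobj a, lispobj b)
    {
        if (((a | b) & TAG_MASK) == FIXNUM_TAG) {
            lispobj sum;
            if (!__builtin_add_overflow(a, b, &sum))  /* result still a fixnum */
                return sum;
        }
        return slow_generic_add(a, b);   /* rare path: other types or overflow */
    }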


BibTeX citation:

@techreport{Johnson:CSD-88-470,
    Author = {Johnson, Douglas},
    Title = {Trap Architectures for Lisp Systems},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1988},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/5733.html},
    Number = {UCB/CSD-88-470},
    Abstract = {Recent measurements of Lisp systems show a dramatic skewing of operation frequency. For example, small integer (fixnum) arithmetic dominates most programs, but other number types can occur on almost any operation. Likewise, few memory references trigger special handling for garbage collection, but nearly all memory operations could trigger such handling. Systems like SPARC and SPUR have shown that small amounts of special hardware can significantly reduce the need for inline software checks by trapping when an unusual condition is detected. A system's trap architecture thus becomes key to performance. In most systems, the trap architecture is intended to handle errors (e.g., address faults) or conditions requiring large amounts of processing (e.g., page faults). The requirements for Lisp traps are quite different: the trap frequency is higher, the processing time per trap is shorter, and most traps need to be handled in the user's address space and context. This paper examines these requirements, evaluates current trap architectures, and proposes enhancements to meet them. These enhancements increase Lisp performance by 9%-32% at a cost of about 1.4% more CPU logic.}
}

EndNote citation:

%0 Report
%A Johnson, Douglas
%T Trap Architectures for Lisp Systems
%I EECS Department, University of California, Berkeley
%D 1988
%@ UCB/CSD-88-470
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/5733.html
%F Johnson:CSD-88-470