This section compares the micro-benchmark performance of SPIN-based Rhino and Digital UNIX-based Rhino to show how SPIN's extension architecture improves the performance of critical functions. Table 2 shows the time breakdown of several important events.
Null call shows the overhead of a null system call (on Digital UNIX, we measured the latency of getpid). SPIN is slower than Digital UNIX because the system call implementation in SPIN requires additional mechanisms to protect the kernel from the runtime failure of an extension [unixemul].
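For reference, below is a minimal sketch of the kind of measurement loop behind such a null-call benchmark. The iteration count and the use of gettimeofday are our choices, not details from the paper; note also that on some systems the C library caches getpid in user space, in which case a different call would be needed to force a true kernel crossing.

```c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define ITERS 100000

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERS; i++)
        (void)getpid();                  /* the "null" system call */
    gettimeofday(&end, NULL);

    double usec = (end.tv_sec - start.tv_sec) * 1e6
                + (end.tv_usec - start.tv_usec);
    printf("getpid: %.3f usec/call\n", usec / ITERS);
    return 0;
}
```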
Begin shows the latency of trans_begin. Commit(ro) is the time to commit a read-only transaction. Commit(8byte) is the time to commit a transaction that modified 8 bytes on a single page. Page diffing is used during commits.
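Page diffing means that, at commit time, each dirty page is compared word by word against a clean copy saved when the page was first modified, and only the changed words are written to the log. The sketch below illustrates that comparison; the names (diff_page, log_word) and the 8 KB page size are our assumptions for illustration, not code from Rhino.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 8192                 /* assumed Alpha page size */
#define WORDS (PAGE_SIZE / sizeof(uint64_t))

/* Stub: a real commit would append the word to the redo log. */
static void log_word(size_t offset, uint64_t value)
{
    printf("log: offset %zu, value %llx\n",
           offset, (unsigned long long)value);
}

/* Compare a dirty page word by word against the clean copy saved
 * when the page was first modified; log only the changed words. */
static void diff_page(const uint64_t *clean, const uint64_t *dirty)
{
    for (size_t i = 0; i < WORDS; i++)
        if (clean[i] != dirty[i])
            log_word(i * sizeof(uint64_t), dirty[i]);
}

int main(void)
{
    static uint64_t clean[WORDS], dirty[WORDS];
    memcpy(dirty, clean, sizeof clean);
    dirty[3] = 0xdeadbeef;             /* an 8-byte update */
    diff_page(clean, dirty);           /* logs exactly one word */
    return 0;
}
```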
Four numbers are shown for page faults. ``Read'' faults are caused by load instructions, and ``write'' faults by store instructions. ``Warm'' faults occur when database contents are in main memory. Thus, these are times with no disk I/O. ``Cold'' faults occur when database contents are not in main memory and require pages to be read from the disk.
The SPIN version outperforms the UNIX version for all events except the null call.
The performance difference is largest for warm page faults. There are two reasons for this: (1) because the page fault handler in SPIN runs in the kernel address space, it eliminates most user-kernel crossings, and (2) page table manipulation in SPIN is more efficient than the mprotect calls used in Digital UNIX. In SPIN, memory protection can be changed by rewriting the MMU page table directly. In contrast, mprotect requires more work, because it must update the memory object map data structure so that its effect persists regardless of paging activity.
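To illustrate the contrast, here is a sketch of the user-level write-fault path that a UNIX implementation of this scheme typically takes: pages are mapped read-only, the first store to a page is delivered as a SIGSEGV, and the handler re-enables writes with mprotect (after saving a clean copy for diffing). The handler and helper names are hypothetical, not taken from Rhino; the point is that every step, including the mprotect call itself, crosses the user-kernel boundary, which is exactly the overhead the in-kernel SPIN handler avoids.

```c
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long psz;

/* The first store to a write-protected page lands here: a real
 * system would save a clean copy of the page for diffing, then
 * re-enable writes.  The mprotect call is itself another
 * user-kernel crossing. */
static void on_write_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char *page = (char *)((uintptr_t)si->si_addr & ~(uintptr_t)(psz - 1));
    /* save_clean_copy(page);   -- hypothetical helper */
    mprotect(page, psz, PROT_READ | PROT_WRITE);
}

int main(void)
{
    psz = sysconf(_SC_PAGESIZE);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_write_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    char *db = mmap(NULL, psz, PROT_READ,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (db == MAP_FAILED)
        return 1;

    db[0] = 1;   /* store -> write fault -> handler -> retried store */
    printf("db[0] = %d\n", db[0]);
    return 0;
}
```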