 
 
 
 
 
 
   
We use all seven applications from the SPEC JVM98 benchmark suite [27] in our experiments; they are briefly described in Figure [*].
The benchmark programs can be run with three different problem sizes, named s100, s10, and s1. Although the s100 data set is the largest, the sizes do not scale in proportion to the labels 100, 10, and 1.
In the interest of simulation time, particularly for the interpreted mode with the s100 data set, we use the smallest problem size (s1).
We believe that the s1 data set is representative of shorter-running applications or applets. While some of the observations from s1 would also apply to the larger data sets, the impact of garbage collection and dynamic compilation tends to change at larger data sets. Thus, we provide s10 and s100 energy breakdowns in Figure [*] to emphasize this difference.
 
 
 
 
