In this section, we study the distribution of loads placed on the web server
by different users. Our earlier analysis [1] examined the difference
in load distribution between wireless users and offline users. Here we look at
the load distribution at a finer granularity: the level of individual users.
Figure 16 and Figure 17 show the total number of accesses and the total amount of data requested by different clients, respectively (users with invalid identifiers were discarded). As the
figures show, there is significant variation in the load that different users place
on the web server: some users request several orders of magnitude more
documents and data than others. Accesses from the wireless clients alone exhibit a
similar property. Service providers can therefore consider designing different pricing
plans to cater to the widely varying needs of different users.
Figure 16: Total number of accesses made by different users.
Figure 17: Total amount of data received by different users.
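To make the per-user aggregation concrete, the following sketch computes request counts and bytes transferred per user from an access log. It is only illustrative: the field positions and the use of "-" to mark invalid identifiers or missing sizes are assumptions and would need to match the actual trace format.

```python
from collections import defaultdict

def per_user_load(log_lines):
    """Aggregate request counts and bytes transferred per user.

    Assumes each log line carries a user identifier (first field) and a
    byte count (last field); these positions are hypothetical and must be
    adapted to the real trace layout.
    """
    requests = defaultdict(int)
    bytes_received = defaultdict(int)
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        user_id, size = fields[0], fields[-1]
        if user_id == "-":          # discard requests with invalid identifiers
            continue
        requests[user_id] += 1
        try:
            bytes_received[user_id] += int(size)
        except ValueError:
            pass                    # size field missing ("-")
    return requests, bytes_received

# Ranking users by load exposes the orders-of-magnitude spread, e.g.:
# counts, volumes = per_user_load(open("access.log"))
# top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```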
Figure 18 shows the inter-arrival time between
requests coming from the same user. Requests from the offline
users are much burstier than those from the wireless users: 97% of the
requests from offline users have an inter-arrival time of 1 second or less,
compared with only 9% of the requests from wireless users. Traffic from
offline PDA users is very bursty because their requests are generated by the
downloader program rather than by a human; these users also generate
significantly more requests than wireless users. If not handled appropriately,
such bursts can delay wireless users unnecessarily. Web site designers can
address this problem in a number of ways. For example, they can give higher
priority to wireless users, or confine the bursts of offline-user requests to
a few front-door servers (servers that handle incoming HTTP requests). An
orthogonal efficiency issue is the PDA synchronization protocol itself:
instead of sending a large number of small requests, the protocol could batch
them into a single request, reducing both server load and round-trip latency.
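The batching idea can be illustrated with a short sketch. The batch endpoint, payload format, and URL below are hypothetical and are not part of any existing synchronization protocol; the point is only that a client could group many small document fetches into one request.

```python
import json
import urllib.request

def batched_sync(urls, batch_endpoint="https://example.com/sync/batch"):
    """Send one batched synchronization request instead of many small ones.

    The endpoint and JSON payload format are hypothetical; the sketch only
    shows how a downloader could amortize per-request overhead by grouping
    document fetches into a single round trip.
    """
    payload = json.dumps({"documents": list(urls)}).encode("utf-8")
    req = urllib.request.Request(
        batch_endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # single round trip
        return json.loads(resp.read())

# Instead of len(urls) separate GETs arriving in a burst, the server sees one
# request, which also removes the per-request round-trip latency on the client.
```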
Figure 18: CDF of inter-arrival time between consecutive requests from the same user.
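The kind of CDF shown in Figure 18 can be derived from the trace with a few lines of code. The sketch below assumes the log has already been reduced to (user_id, timestamp) pairs sorted by time; those names and the preprocessing step are illustrative.

```python
def interarrival_cdf(events, threshold=1.0):
    """Compute inter-arrival times between consecutive requests of the same
    user, plus the fraction of gaps at or below `threshold` seconds.

    `events` is an iterable of (user_id, timestamp_in_seconds) pairs
    sorted by timestamp.
    """
    last_seen = {}
    gaps = []
    for user_id, ts in events:
        if user_id in last_seen:
            gaps.append(ts - last_seen[user_id])
        last_seen[user_id] = ts
    gaps.sort()                                  # sorted gaps give the empirical CDF
    within = sum(1 for g in gaps if g <= threshold)
    fraction = within / len(gaps) if gaps else 0.0
    return gaps, fraction

# Comparing the two populations quantifies the burstiness gap, e.g.:
# _, frac_offline  = interarrival_cdf(offline_events)
# _, frac_wireless = interarrival_cdf(wireless_events)
```

Running this separately over the offline and wireless request streams yields the fractions of sub-second inter-arrival times discussed above.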