This section discusses a more complex CORBA application scenario, the Proxy Pattern [22], in which a proxy object acts as an interface between CORBA clients and CORBA servers, as shown in Figure 8 for three client and server objects; the PPL script implements this pattern for the cubit test type under OmniORB. The proxy pattern uses the basic client-server pattern as a component, extending it to a group of client-server pairs communicating through a proxy object. In this case we use three client-server pairs under the proxy pattern, which exhibit the client and server behaviors, respectively, while executing the cubit and throughput test types. The client contacts the corresponding server at run time by passing either the object reference or the server's name, registered with the CORBA Naming Service, to the proxy object, which forwards the client request to the appropriate server. The data type used for the transfer of information between the clients and the proxy object is the CORBA ``Any'' type.
The proxy object can be used in two modes. In the first, the proxy plays a role only when establishing a connection between the client and server. In the second, the proxy actually routes the data between the objects, which has a significant effect on performance. We present results for the second mode.
Figure 9 shows the performance of the OmniORB and TAO client objects using the cubit test type under the proxy pattern on Linux, while Figure 10 shows the performance of clients under those ORBs, as well as under CORBAplus, on Solaris. The numbers of calls per second shown in Figure 9 are averages over the three clients for both OmniORB and TAO. There was some non-trivial variance among clients for some tests and some ORBs, which would be another interesting point for further investigation. However, we illustrate the use and utility of our methods using the average results, within which there are several points of interest.
The most obvious point is that using the proxy object to mediate data transfer between client and server significantly impacts performance, reducing it to approximately 10 percent of that for the simple client-server pattern. Some impact is certainly expected due to the use of three concurrent client-server pairs, and a reduction to 30 percent of the single-pair performance would be plausible. Clearly, using a proxy object has a significant additional impact on performance. While not particularly surprising, this result emphasizes the importance of application-scenario-based testing. This pattern was, for example, discussed in a popular magazine [22] and is used by one of our colleagues as the basis for a WWW meta-search engine. Clearly, any developer contemplating such an architecture would be grateful to know the likely impact before implementing the software.
The second point of interest is that both TAO and OmniORB enjoy a significant performance increase in moving from Linux to Solaris, while TAO's performance for the client-server pattern was relatively constant between the two systems. A third significant observation is that the magnitude of the performance increase for OmniORB in moving from Linux to Solaris is much greater than TAO's, increasing three- to five-fold in most cases. Finally, the difference between TAO and CORBAplus performance under Solaris is greater under this pattern than under the client-server pattern.
These observations support our assertion that application performance scenarios (performance patterns) should be part of any comprehensive benchmark. The comparative performance between different ORBs on the same operating system, and between the same ORB on different operating systems, changed significantly with the change in pattern. This also supports our view that developers using performance results to select an ORB and operating system as an implementation platform should use test results for object architectures (performance patterns) that faithfully represent their proposed application.
We also changed the test type, as we did for the client-server pattern, to test the throughput performance among the object pairs. Figure 11 shows results for OmniORB under Solaris, while Figure 12 presents the throughput results for TAO. Both tests show that throughput is reduced five- to ten-fold. The TAO results still show an orderly increase in throughput with buffer size, although this converges to a level of 2 Mb/s for all but the smallest buffer size. OmniORB performance, in contrast, does not vary in nearly as orderly a manner with buffer or data set size, and does not converge to similar throughput for most buffer sizes. The performance using 8 KB, 4 KB, and 2 KB buffers is particularly interesting. As with the client-server pattern, the 4 KB buffer size provides the best performance, but 8 KB buffers do substantially better for small data sets than for large ones. This could easily be due to system-level buffering effects.
Determining why the throughput varies in these ways with data and buffer size will require gathering information from the operating system layer, to see whether the networking protocols play a role, and from the ORB layer, to see whether there is an influence at that level. Section 4.3 illustrates how we might use the DSKI to gather protocol-layer information, although in the context of a simpler example.