I'll use the graphs below to tell a story about 64-bit performance. In the past, your WAS application ran in a 32-bit memory space. Hardware vendors started shipping 64-bit machines, and you moved your application to a 64-bit hardware/OS platform hoping for improved performance. Instead, the application dropped to 85% of its original 32-bit performance and its heap requirements grew by almost 50% (shown by the comparison between the first and second bars in each graph). Ouch!
The reason the heap requirements grow is simple: memory references are now twice the size they were before. The performance drop is closely related to that memory growth. Under the covers, Java's memory references doubled in size, inflating the memory structures in the WAS runtime and in your application's objects. Unfortunately, processor memory caches didn't get larger at the same time. That means more cache misses, which means more busy work for the hardware shuffling the larger memory around, which means worse application performance.
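To make this concrete, here is a small, hypothetical Java sketch (not from the original measurements) that estimates the heap cost of a reference-heavy object graph. Run the same class on a 32-bit JVM and on a 64-bit JVM and the per-object number it prints should roughly reflect the wider references and headers described above. The class and method names are my own, chosen for illustration.

```java
// Illustrative sketch: estimate the approximate heap cost of one small,
// reference-heavy object (array slot + object header + two reference fields).
public class ReferenceFootprintProbe {

    // A tiny node holding two references -- the kind of structure whose size
    // grows noticeably when references go from 4 to 8 bytes.
    static final class Node {
        Node left;
        Node right;
    }

    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int count = 2_000_000;

        // Encourage the JVM to settle before taking the first measurement.
        System.gc();
        long before = usedHeap();

        Node[] nodes = new Node[count];
        for (int i = 0; i < count; i++) {
            nodes[i] = new Node();
        }

        System.gc();
        long after = usedHeap();

        // The nodes array stays reachable here, so nothing we measured was collected.
        System.out.printf("Approximate bytes per Node (slot + header + 2 refs): %.1f%n",
                (after - before) / (double) count);
        System.out.println("Nodes retained: " + nodes.length);
    }
}
```

The numbers are only approximate (heap measurements are noisy), but the gap between a 32-bit run and a plain 64-bit run makes the footprint growth easy to see for yourself.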
We introduced 64-bit support in WAS V6.1 for customers who needed to keep data in memory, such as database caches, that was larger than 32-bit addressability allows. For those applications, 64-bit support is a major win: it's always faster to work in memory than to stay in 32-bit and offload that sort of processing to disk.
In WAS V6.1, we also introduced a simple answer for users who really only needed 32-bit address spaces: we started supporting 32-bit WAS on 64-bit OSes, essentially sidestepping the problem (and moving you back to the first bar in each graph). However, that didn't help users who needed process sizes larger than the 32-bit OS process limit but far less than full 64-bit addressability (who really needs 16.8 million terabytes?!), and it made managing deployments that mixed 32-bit and 64-bit applications very complex.
In WAS V7.0 we introduce compressed reference (CR) technology. CR technology lets 64-bit WAS allocate large heaps without the memory footprint growth and performance overhead. With CR, instances can allocate heap sizes up to 28GB with physical memory consumption similar to an equivalent 32-bit deployment (by the way, I'm seeing more and more applications that fall into this category -- only "slightly larger" than the 32-bit OS process limit). For applications with larger memory requirements, full 64-bit addressing kicks in as needed. CR technology lets your applications use just enough memory and get maximum performance, no matter where along the 32-bit/64-bit address space spectrum your application falls.
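If you want to see the effect on your own deployment, something like the sketch below could help. The JVM options named in the comments (-Xcompressedrefs/-Xnocompressedrefs on the IBM J9 JVM that ships with WAS, -XX:+UseCompressedOops on HotSpot) are my understanding of the relevant switches, so verify them against your JVM's documentation before relying on them; the class itself is a trivial environment report I've added for illustration, not part of WAS.

```java
// Hedged illustration: run a probe class (for example the ReferenceFootprintProbe
// shown earlier) with and without compressed references and compare the results.
//
//   java -Xmx4g -Xcompressedrefs   CompressedRefCheck   // 64-bit with CR (assumed J9 option)
//   java -Xmx4g -Xnocompressedrefs CompressedRefCheck   // 64-bit, full-width references
//
// This class simply reports the environment it runs in.
public class CompressedRefCheck {
    public static void main(String[] args) {
        System.out.println("JVM data model (bits): "
                + System.getProperty("sun.arch.data.model", "unknown"));
        System.out.println("JVM vendor: " + System.getProperty("java.vm.vendor"));
        System.out.println("Max heap (MB): "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}
```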
So returning to the original problem of 85% performance and almost 50% memory growth, how does 64-bit WAS V7.0 measure up on the original application? The third bar in each graph is what you get "out of the box" with WAS V7.0 64-bit: performance within 5% of 32-bit and less than 3% growth in heap requirements. Rather impressive what a Java virtual machine can do with no changes to the application! Let's see a C/C++ program do the same so easily.
I'm interested in feedback from users who have tried WAS 64-bit (either with WAS V6.1 or 7.0).
1 comment:
So what causes the 3% increase in memory footprint and 5% drop in performance when 64-bit is supposed to give higher perf due to allowing more registers to be used?
I can't complain too much though. Now that we have compressed pointers on 3 JVMs, I can now run a 6GB heap without needing far more memory and suffering the perf hits.
I'll happily settle for more RAM at close to 32-bit perf... for now.