Memory management and compression in vSphere 4.1

With vSphere 4.1, VMware released a great new feature called memory compression. At first, after reading the release notes, I thought memory compression was just one step before swapping to disk would occur. However, after reading the whitepaper “Understanding Memory Resource Management in VMware ESX 4”, I learned some more details I want to share with you. For a full understanding, do read the whitepaper: this post is a quick summary of how memory compression works, while the whitepaper gives you an in-depth view of memory management and a lot of performance data on these techniques.

You probably already know about Transparent Page Sharing (TPS), ballooning and swapping, but let’s go through them again quickly. For guest OSes that don’t use large pages, ESX stores the virtual machine memory in 4K pages in host physical memory and uses Transparent Page Sharing to find duplicate 4K pages (at host level), storing each unique page only once. If there is memory contention at host level, ESX starts the ballooning process, which tries to reclaim unused pages from the VMs and return them to the host. If after ballooning there is still memory contention, ESX starts swapping VM memory to disk.
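The page-sharing idea can be sketched in a few lines of Python. This is illustrative only; the real TPS implementation works on hardware pages inside the VMkernel, not on Python objects:

```python
import hashlib

PAGE_SIZE = 4096  # 4K small pages

def share_pages(pages):
    """Deduplicate identical pages, keeping one stored copy per unique content."""
    store = {}    # content hash -> the single stored copy
    mapping = []  # per-page reference into the store
    for page in pages:
        key = hashlib.sha1(page).hexdigest()
        # A real implementation also does a full byte-by-byte compare on a
        # hash match before sharing, to guard against hash collisions.
        if key not in store:
            store[key] = page
        mapping.append(key)
    return store, mapping

# Three pages, two of them identical (zero-filled): only two copies survive.
pages = [bytes(PAGE_SIZE), bytes(PAGE_SIZE), bytes([1]) * PAGE_SIZE]
store, mapping = share_pages(pages)
```

Three guest pages now need only two pages of host physical memory; the two zero pages map to the same stored copy.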

When the host and guest OS are capable of using large pages, the VM memory is stored in 2MB pages without searching for duplicates with TPS. This is because the chance of finding duplicate 2MB pages is far smaller than with 4K pages and the comparison would take too much time; however, a hash of each 4K page within the 2MB pages is still created. With large pages, ballooning is the first memory-saving technique, but if this doesn’t solve the memory contention the ESX host starts swapping memory to disk. In this process the 2MB pages are broken into small 4K pages, and the pre-generated hashes are used to share the small pages before they are swapped to disk.
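The break-down step can be sketched the same way (again illustrative Python; real page tables obviously are not Python lists):

```python
LARGE_PAGE = 2 * 1024 * 1024  # 2MB large page
SMALL_PAGE = 4096             # 4K small page

def break_large_page(large_page):
    """Split a 2MB page into its 512 constituent 4K pages.

    These small pages can then be shared via their pre-generated hashes
    before any of them are swapped out to disk.
    """
    assert len(large_page) == LARGE_PAGE
    return [large_page[i:i + SMALL_PAGE]
            for i in range(0, LARGE_PAGE, SMALL_PAGE)]

small_pages = break_large_page(bytes(LARGE_PAGE))
```

A single 2MB page yields 512 swap candidates of 4K each.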

Memory compression

With memory compression an extra step is added in both scenarios. Just before memory is swapped out to disk, that is, after ballooning and TPS have not resolved the memory contention, ESX will try to compress the 4K swap-candidate pages. When a compression ratio of more than 50% is achieved, the page (now 2K or less) is stored in the compression cache instead of being swapped. If a compression of more than 50% is not achieved, the 4K page is swapped out to disk. If at a later stage a compressed page is accessed by a VM, it is first decompressed and removed from the compression cache.
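That decision logic can be sketched with Python’s zlib standing in for ESX’s own compression algorithm (a sketch only; the real algorithm and data structures differ):

```python
import os
import zlib

PAGE_SIZE = 4096
THRESHOLD = PAGE_SIZE // 2  # page must shrink to 2K or less to be cached

def handle_swap_candidate(page, compression_cache, swap_file):
    """Compress a 4K swap-candidate page; cache it if it halves, else swap it."""
    compressed = zlib.compress(page)
    if len(compressed) <= THRESHOLD:
        compression_cache.append(compressed)  # kept in memory at 2K or less
    else:
        swap_file.append(page)                # poor ratio: swap to disk instead

cache, swap = [], []
handle_swap_candidate(bytes(PAGE_SIZE), cache, swap)       # zero page: compresses well
handle_swap_candidate(os.urandom(PAGE_SIZE), cache, swap)  # random page: does not
```

The zero-filled page lands in the compression cache; the random page fails the 50% test and goes to the swap file.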

Compression cache

The compression cache is memory inside the virtual machine’s own memory, with a maximum of 10% (this can be changed through the advanced setting Mem.MemZipMaxPct), which means ESX does not allocate additional host physical memory for the compression cache. When VM memory is undercommitted, no space is allocated to the compression cache; as memory pressure grows, the space allocated to the compression cache grows up to the maximum of 10%.
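The sizing is simple enough to do on a napkin, but here it is as a one-liner (illustrative only; the function name is mine, not a VMware API):

```python
def compression_cache_max_gb(vm_memory_gb, mem_zip_max_pct=10):
    """Ceiling on the compression cache, carved out of the VM's own memory.

    mem_zip_max_pct mirrors the Mem.MemZipMaxPct advanced setting (default 10).
    """
    return vm_memory_gb * mem_zip_max_pct / 100

# An 8GB VM can devote at most 0.8GB of its own memory to the cache;
# ESX allocates no additional host physical memory for it.
print(compression_cache_max_gb(8))  # 0.8
```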

Should the memory pressure keep growing and more memory be needed, pages from the compression cache will be decompressed and swapped out to disk.

Compression performance

As with other memory management techniques VMware has introduced, people are often a bit skeptical when first reading about it, and performance is always the number one concern. To ease your mind a bit, some facts:

– ESX will not pro-actively compress pages

– Compression costs about 2-3% host CPU time

– Compression takes about 20 microseconds

– The penalty of the extra CPU time and compression time is small compared to having to swap out to disk

– The compression used is based on GZip but adapted to specific ESX needs
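To put those numbers in perspective, a back-of-the-envelope comparison. The 20 microsecond figure is from the list above; the disk latency is my own assumed typical value for a rotational disk, not a VMware figure:

```python
# Per-page cost of reclaiming memory, illustrative numbers only.
compress_us = 20       # compression time per 4K page, cited above
disk_access_us = 5000  # assumed ~5 ms random access on a rotational disk

speedup = disk_access_us / compress_us
print(f"Compression is roughly {speedup:.0f}x cheaper than a disk access")
```

Even with generous assumptions for the disk, keeping a page compressed in memory beats swapping it out by orders of magnitude.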

22 thoughts on “Memory management and compression in vSphere 4.1”

  1. Nice write-up Gab, saves me reading the VMware doc again.

    One question. If the cache is inside the VM's memory, would this mean that if a VM has 8GB RAM and MemZipMaxPct is set to 10%, 800MB of that VM's RAM might become unavailable?


  2. Thx. No, 800MB of this VM's memory is compressed. The VM doesn't know whether its memory is compressed or not; it's transparent. The compression cache only holds memory from that VM.

  3. Hi David,
    Yes, in esxtop you can see the ZIP/MB counter that shows the amount of MBs compressed at host level.


  4. I still love the idea that if you have 100 VMs all running the same version of Windows, you don't have 100 different Windows instances in memory but only 1, as it's all the same code. In-memory dedupe, if you will. However, now it seems that's not the case, as TPS doesn't work unless there is memory contention? And I'm not sure how much it helps then, since like you said it saves everything in 2MB pages, and it's unlikely to find another identical 2MB page, even though it will still break those into 4K pages at some point before it swaps to disk. So are large pages really the way to do this vs. smaller pages? Why is that the default with 5500+ CPUs?

  5. It's the default because the hardware memory controller can be utilized much more effectively when it is dealing with large pages. The resources on the chip are finite, and dealing with the 36.9M pages that would be needed on a host with 144GB of RAM using 4KB pages is far beyond what can be crammed into the hardware MMU. Not to mention the fact that as most data structures grow, their performance decreases at a rate that is much worse than linear. Benchmarks I've seen show the hit for turning off large page tables to be between 5 and 15% depending on workload. Since vSphere will break down LPTs by bypassing the hardware MMU if it's under significant enough memory pressure, there's little reason not to enable them.

Comments are closed.