ESX Memory Management – Part 1

I have been receiving a lot of questions lately about ESX memory management. Things that are very obvious to me are apparently not so obvious to everyone, so I’ll try to explain them from my point of view.

First let’s have a look at the virtual machine settings available to us. On the vm settings page we have several options we can configure for memory assignment.

  1. Allocated memory: This is the amount of memory we assign to the vm and is also the amount of memory the guest OS will see as its physical memory. This is a hard upper bound: the vm cannot exceed it, even if it demands more memory. It is configured on the hardware tab of the vm’s settings.
  2. Reservations: A reservation is a guaranteed amount of memory assigned to the vm. This is a way of ensuring that the vm gets a minimum amount of memory. When this reservation cannot be met, you will be unable to start the vm; this is known as “Admission Control”. Reservations are set on the resources tab of the vm’s settings and by default there is no reservation set.
  3. Limits: A limit is a restriction on the vm, so it cannot use more memory than this limit. If you set this limit lower than the allocated memory value, the balloon driver will start to inflate as soon as the vm demands more memory than the limit. Limits are set on the resources tab of the vm’s settings and by default the limit is set to “unlimited”.
     Now that we know about limits and reservations, we need to take a quick look at the VMkernel swap file. This swap file is used by the VMkernel to swap out the vm’s memory as a last resort to free up memory when the host is running out of it. When we set a reservation, that memory is guaranteed and cannot be swapped out to disk. So whenever a vm starts up, the VMkernel creates a swap file with a size of the limit minus the reservation (see the sketch after this list). For example, take a vm with a 1024MB limit and a 512MB reservation: the swap file created will be 1024MB – 512MB = 512MB. If we set the reservation to 1024MB, no swap file is created at all. Remember that by default there are no reservations and no limits set, so the swap file created for each vm will be the same size as the allocated memory.
  4. Shares: With shares you set a relative importance on a vm. Unlike limits and reservations, which are fixed, shares work dynamically. Remember that the share system only comes into play when memory resources are scarce and contention is occurring. Shares are set on the resources tab of the vm’s settings and can be set to “low”, “normal”, “high” or a custom value.
    low = 5 shares per 1MB allocated to the vm
    normal = 10 shares per 1MB allocated to the vm
    high = 20 shares per 1MB allocated to the vm
    It is important to note that the more memory you assign to a vm, the more shares it receives.
    Let’s look at an example to show how this share system works (see also the sketch after this list). Say you have 5 vms, each with 2,000MB of memory allocated and the share value set to “normal”. The ESX host has only 4,000MB of physical machine memory available for virtual machines. Each vm receives 20,000 shares according to the “normal” setting (10 * 2,000). The sum of all shares is 5 * 20,000 = 100,000. Every vm will receive an equal share of 20,000/100,000 = 1/5th of the available resources = 4,000/5 = 800MB.
    Now we change the shares setting on 1 vm to “high”, which results in that vm receiving 40,000 shares instead of 20,000. The sum of all shares increases to 120,000. This vm will receive 40,000/120,000 = 1/3rd of the available resources = 4,000/3 ≈ 1,333MB. All the other vms will each receive only 20,000/120,000 = 1/6th of the available resources = 4,000/6 ≈ 667MB.
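
To make the swap file sizing rule from item 3 concrete, here is a minimal Python sketch. It is purely illustrative: the function name vswp_size_mb is made up, and it only reproduces the arithmetic described above, not anything VMware actually ships.

    def vswp_size_mb(allocated_mb, reservation_mb=0, limit_mb=None):
        # An "unlimited" limit defaults to the allocated memory size
        limit = allocated_mb if limit_mb is None else limit_mb
        # Swap file size = limit - reservation (never negative)
        return max(limit - reservation_mb, 0)

    print(vswp_size_mb(1024, reservation_mb=512))   # 512  -> a 512MB swap file
    print(vswp_size_mb(1024, reservation_mb=1024))  # 0    -> no swap file created
    print(vswp_size_mb(1024))                       # 1024 -> defaults: swap file equals allocated memory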
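
The share example from item 4 can be expressed as a short Python sketch as well. Again this is illustrative only: the names are made up and the real VMkernel scheduler is far more involved, but the proportional arithmetic is the same.

    SHARES_PER_MB = {"low": 5, "normal": 10, "high": 20}

    def memory_per_vm(host_mb, vms):
        # vms is a list of (allocated_mb, share_setting) tuples; returns the
        # memory in MB each vm would receive when host memory is under contention
        shares = [alloc * SHARES_PER_MB[setting] for alloc, setting in vms]
        total = sum(shares)
        return [round(host_mb * s / total) for s in shares]

    # 5 vms of 2,000MB each, all set to "normal", on a host with 4,000MB for vms:
    print(memory_per_vm(4000, [(2000, "normal")] * 5))
    # -> [800, 800, 800, 800, 800]

    # Change one vm to "high":
    print(memory_per_vm(4000, [(2000, "high")] + [(2000, "normal")] * 4))
    # -> [1333, 667, 667, 667, 667]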

Instead of configuring these settings on a per-vm basis, it is also possible to configure them on a resource pool. A VMware ESX resource pool is a pool of CPU and memory resources. I always think of a resource pool as a group of VMs.

This concludes the memory settings we can configure on a vm. Next time I will go into ESX memory management techniques.

Continue reading Part 2


29 Comments on “ESX Memory Management – Part 1”

  1. #1 Duncan
    on Apr 27th, 2009 at 4:29 pm

    Great post Arnim,

    But when you set a limit of 512MB on a 1024MB VM and the GOS requires more than 512MB, isn’t it the VM swap file that is being used instead of RAM? The balloon driver isn’t used for this, as far as I know.

  2. #2 Scott Lowe
    on Apr 27th, 2009 at 4:40 pm

    Duncan, I do believe you are correct: when you set a limit, you are specifying the maximum amount of physical RAM that may be supplied to a VM. The rest is supplied from VMkernel swap space.

    Otherwise, great article!

  3. #3 VMGuru
    on Apr 27th, 2009 at 4:47 pm

    Duncan,

    From my experience, setting limits lower than allocated memory is nasty business. The first thing that will happen is that the guests will start to kick off the balloon driver. I’ve seen some environments where a bad template was deployed and all VMs were hit with a 256 or 512MB limit on 1024MB of memory. Ballooning is intense, and in many cases fails to deflate, causing swap. It got to be so bad that, in the monitoring product we developed, we actually created specific rules for the scenario of 1) limits set < allocated and 2) the balloon driver failing to deflate after 10 minutes.

    Scott

  4. #4 Gabrie van Zanten
    on Apr 27th, 2009 at 4:55 pm

    Think Duncan is correct, since ballooning would happen when the ESX host does not have enough memory and therefore has to get it back from other VMs. Ballooning will only reclaim unused memory. In this specific example ESX probably has enough memory left and will not try to reclaim anything.

    Well, that was what I was thinking thus far….

    In my lab, I found that Arnim is right!

    Citrix VM, 2GB RAM assigned, no reservation, no limit
    ESXTOP says:
    Host level – current MEMCTL = 250 / Target MEMCTL = 2057
    Host level – current SWAP = 245 / Target SWAP = 232
    VM level MEMSZ = 2048, SZTGT = 1588

    Now I changed the memory limit to 750MB:
    ESXTOP says:
    Host level – current MEMCTL = 2659 / Target MEMCTL = 2659
    Host level – current SWAP = 245 / Target SWAP = 212
    VM level MEMSZ = 2048, SZTGT = 878
    SWCUR remains 0 (SWAP current)
    MCTLSZ goes up to 1050 !!!

    So Duncan and my thoughts on this are wrong. ESX will start ballooning.

    I then reset the limit to unlimited and after 5 min, these are the figures:
    MCTLSZ = 0
    SWCUR = 0

  5. #5 Tim Curless
    on Apr 27th, 2009 at 5:04 pm

    I don’t know that the GOS would get involved given that it thinks it has 1024MB to allocate. Once the GOS goes past 512MB in use the balloon driver begins to inflate causing the GOS’ native memory management and garbage collection to kick in. This “encourages” the GOS to not surpass 512MB of memory in use. At least this is how I understand it; I could be way off base.

  6. #6 Tim Curless
    on Apr 27th, 2009 at 5:05 pm

    I forgot to mention that this is indeed a great post :)

  7. #7 Dave Convery
    on Apr 27th, 2009 at 5:54 pm

    Yes, I agree with Duncan. I believe it will use swap in this scenario. The balloon is only called if there is contention for pRAM.

  8. #8 Gabrie van Zanten
    on Apr 27th, 2009 at 7:57 pm

    After having a nice discussion on twitter: the big difference is that when you set a limit on a VM, the host reacts to that VM as if the host’s memory is exhausted. And when host memory is exhausted, ballooning happens first; after ballooning has freed as much guest OS memory as possible, host swapping will occur.

  9. #9 Arnim van Lieshout
    on Apr 27th, 2009 at 7:58 pm

    First of all, thank you for all your comments!

    I’ll give you the one and only answer to the question. When you set the limit lower than the allocated memory, the ONLY way to make sure the GOS isn’t using more memory than the limit is to inflate the balloon driver.
    The allocated memory is the size of the GOS’s physical memory. The GOS is unaware that there’s a limit configured. So there must be some technique to prevent the GOS from growing beyond this limit from inside the GOS, and that is exactly what the balloon driver does.
    If ESX would simply swap the memory to disk, it would only prevent the GOS from having more memory backed by machine memory. It would not stop the GOS from growing beyond the limit. So the GOS could use more memory than the limit, although not all of its memory would be in active machine memory, and that doesn’t make sense with regard to setting a cap on memory usage.

  10. #10 Arnim van Lieshout
    on Apr 27th, 2009 at 8:09 pm

    When there’s memory contention it’s a completely different story. The ESX kernel will use ballooning in favour of swapping as long as the reclamation state is “high” or “soft”. When the reclamation state changes to “hard”, the kernel relies on swapping to forcibly reclaim memory. In the “low” state it will eventually block the execution of VMs that are above their target allocations.

    I’ll have a blog post coming up on ESX memory assignment and reclamation techniques.

  11. #11 Gabrie van Zanten
    on Apr 27th, 2009 at 8:12 pm

    Arnim
    The ballooning will NOT prevent the guest from using more than the limit. If a VM gets assigned 1500MB but has a 1GB limit, then the guest OS can use a maximum of 1500MB, but there will never be more than 1GB of host memory involved!

    Gabrie

  12. #12 Arnim van Lieshout
    on Apr 27th, 2009 at 8:32 pm

    Gabrie,

    I think my comment is a bit badly worded. What I meant is that the GOS cannot use more than the limit for application use.
    The guest OS can only use up to the full allocated size because the balloon driver is part of the GOS.

    And as for my swapping statement: both techniques limit the usage of active machine memory, but ballooning is favoured because the native GOS memory management techniques can be utilized.

  13. #13 Memory Behavior when VM Limits are Set - Storage Informer
    on Apr 27th, 2009 at 9:46 pm

    [...] OS has a limit set that is under its total assigned value.  The conversation was kicked off from Arnim Van Lieshout’s blog post on memory management.  This is NOT a good scenario to have in your ESX environment, and I have [...]

  14. #14 Arnim van Lieshout
    on Apr 28th, 2009 at 11:45 am

    After reading Scott Herold’s follow-up post and some lab testing, I must admit that my assumption was wrong. My assumption was that whenever the limit is lower than the allocated size, the balloon driver would inflate to occupy the difference between the allocated size and the limit, and would not deflate, to prevent the GOS from growing beyond the limit.
    But as long as the GOS hasn’t claimed more memory than the limit, there’s no need to balloon because the GOS isn’t trespassing.
    So the memory reclamation techniques only kick in once the GOS trespasses the limit, and since ballooning is favoured over swapping, ballooning will occur first.

    Be sure to read Scott’s article:
    http://www.vmguru.com/index.php/articles-mainmenu-62/mgmt-and-monitoring-mainmenu-68/96-memory-behavior-when-vm-limits-are-set

    -Arnim

  15. #15 Duncan
    on May 3rd, 2009 at 2:46 pm

    I now understand why I usually see swapping and not ballooning. The limits, where I witnessed it, were set low (512MB)… on an active VM there is no way it can keep ballooning, so after a while it continues with swapping.

  16. #16 Harry
    on May 5th, 2009 at 12:53 pm

    Hi Arnim,

    in part one of this nice post you stated that shares are calculated as x shares per MB of memory (5, 10, 20 for low, normal, high). In your example you write that 5 vms, each with 2GB of memory and set to normal, would get 10 * 1000 = 10000 shares each. Maybe I misunderstand something completely, but shouldn’t it be 20,000 shares per vm? 10 * 2000MB?

    I am aware that in the end it does not make any difference, it’s just for me to understand this better.

    regards

    Harry

  17. #17 Arnim van Lieshout
    on May 5th, 2009 at 7:49 pm

    Harry,

    Of course, you are absolutely right.
    I have changed the values in the post.
    Thanks.

    - Arnim

  18. #18 VMware ESX minnehåndtering | Lars Jostein Silihagen
    on May 31st, 2009 at 9:43 pm

    [...] Part 1: The basics of allocated memory, reservations, limits, shares and resource pools. http://www.van-lieshout.com/2009/04/esx-memory-management-part-1/ [...]

  19. #19 The Return of Virtualization Short Takes - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers
    on Jun 5th, 2009 at 6:43 pm

    [...] by a series of blog posts by Arnim van Lieshout on VMware ESX memory management (Part 1, Part 2, and Part 3), Scott Herold decided to join the fray with this blog post. Both Scott’s [...]

  20. #20 New Web Content Actually Worth Reading (June 2009) at Helge Klein
    on Jun 9th, 2009 at 11:16 pm

    [...] Arnim van Lieshout has written a great 3-part series on memory management in VMware ESX. Start reading it here. [...]

  21. #21 VMware: Terminal server performance tuning on VMware ESX | VMpros.nl
    on Sep 17th, 2009 at 6:45 pm

    [...] ESX Memory Management – Part 1 [...]

  22. #22 Happy 1st Blogiversary to me | Arnim van Lieshout
    on Dec 19th, 2009 at 12:04 pm

    [...] ESX Memory Management – Part 1 [...]

  23. #23 Support your favourite blog. Vote Now! | Arnim van Lieshout
    on Jan 6th, 2010 at 10:50 am

    [...] ESX Memory Management – Part 1, Part 2 & Part 3 [...]

  24. #24 Diagram: ESX Memory Management and Monitoring v1.0 | HyperViZor
    on Jan 22nd, 2010 at 9:51 pm

    [...] Resource Management in VMware ESX Server – by Carl A. Waldspurger – Arnim van Lieshout Blog (Part-1 , Part-2, [...]

  25. #25 New Web Content Actually Worth Reading (June 2009) | Sepago
    on Apr 28th, 2010 at 3:58 pm

    [...] Arnim van Lieshout has written a great 3-part series on memory management in VMware ESX. Start reading it here. [...]

  26. #26 VCAP-DCA Study notes – 3.1 Tune and Optimize vSphere Performance | www.vExperienced.co.uk
    on Apr 16th, 2011 at 12:32 pm

    [...] this great series of blog posts from Arnim Van Lieshout on memory management – part one, two and three. And as always the Frank Denneman [...]

  27. #27 ESX Memory Management – Part 3 | Digital Beermat
    on Jun 15th, 2012 at 12:23 pm

    [...] The ESX kernel uses transparent page sharing, ballooning and swapping to reclaim memory. Ballooning and swapping are used only when the host is running out of machine memory or a VM limit is hit (see also Limits I discussed in part 1). [...]

  28. #28 Memory Behavior when VM Limits are Set | Digital Beermat
    on Aug 3rd, 2012 at 12:59 pm

    [...] 2 years ago, there was a community conversation that was kicked off from Arnim Van Lieshout’s blog post on memory management.  Over 31,000 blog hits later, this topic still remains one of the most [...]

  29. #29 VCAP5-DCA Objective 6.2 – Troubleshoot CPU and memory performance « Adventures in a Virtual World
    on Sep 10th, 2012 at 8:09 pm

    [...] Another great explanation in 3 posts: http://www.van-lieshout.com/2009/04/esx-memory-management-part-1/ [...]
